00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 607 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3269 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.086 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.086 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.114 Fetching changes from the remote Git repository 00:00:00.116 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.146 Using shallow fetch with depth 1 00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.146 > git --version # timeout=10 00:00:00.180 > git --version # 'git version 2.39.2' 00:00:00.180 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.210 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.210 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.202 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.213 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.226 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:06.226 > git config core.sparsecheckout # timeout=10 00:00:06.235 > git read-tree -mu HEAD # timeout=10 00:00:06.252 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:06.269 Commit message: "inventory: add WCP3 to free inventory" 00:00:06.269 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:06.385 [Pipeline] Start of Pipeline 00:00:06.400 [Pipeline] library 00:00:06.401 Loading library shm_lib@master 00:00:06.401 Library shm_lib@master is cached. Copying from home. 00:00:06.416 [Pipeline] node 00:00:06.424 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.426 [Pipeline] { 00:00:06.436 [Pipeline] catchError 00:00:06.437 [Pipeline] { 00:00:06.448 [Pipeline] wrap 00:00:06.456 [Pipeline] { 00:00:06.462 [Pipeline] stage 00:00:06.464 [Pipeline] { (Prologue) 00:00:06.628 [Pipeline] sh 00:00:06.909 + logger -p user.info -t JENKINS-CI 00:00:06.928 [Pipeline] echo 00:00:06.930 Node: GP11 00:00:06.938 [Pipeline] sh 00:00:07.237 [Pipeline] setCustomBuildProperty 00:00:07.250 [Pipeline] echo 00:00:07.251 Cleanup processes 00:00:07.256 [Pipeline] sh 00:00:07.540 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.540 2951175 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.553 [Pipeline] sh 00:00:07.834 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.834 ++ grep -v 'sudo pgrep' 00:00:07.834 ++ awk '{print $1}' 00:00:07.834 + sudo kill -9 00:00:07.834 + true 00:00:07.848 [Pipeline] cleanWs 00:00:07.858 [WS-CLEANUP] Deleting project workspace... 00:00:07.858 [WS-CLEANUP] Deferred wipeout is used... 
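The prologue above clears any stale test processes out of the workspace before the run starts. A minimal standalone sketch of that cleanup idiom (the workspace path and the grep -v/awk filtering are taken from the trace; the standalone script form itself is illustrative):

#!/usr/bin/env bash
# Illustrative reduction of the cleanup step traced above.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

# List matching processes first so they appear in the build log; in the trace
# the only match is the pgrep invocation itself.
sudo pgrep -af "$WORKSPACE/spdk" || true

# Kill survivors: drop the pgrep line itself, keep only the PID column, and
# tolerate an empty list ("|| true" mirrors the "+ true" in the trace).
sudo kill -9 $(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}') || true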
00:00:07.864 [WS-CLEANUP] done 00:00:07.869 [Pipeline] setCustomBuildProperty 00:00:07.884 [Pipeline] sh 00:00:08.162 + sudo git config --global --replace-all safe.directory '*' 00:00:08.257 [Pipeline] httpRequest 00:00:08.281 [Pipeline] echo 00:00:08.283 Sorcerer 10.211.164.101 is alive 00:00:08.293 [Pipeline] httpRequest 00:00:08.297 HttpMethod: GET 00:00:08.298 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.299 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.313 Response Code: HTTP/1.1 200 OK 00:00:08.313 Success: Status code 200 is in the accepted range: 200,404 00:00:08.314 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.544 [Pipeline] sh 00:00:10.829 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.846 [Pipeline] httpRequest 00:00:10.876 [Pipeline] echo 00:00:10.879 Sorcerer 10.211.164.101 is alive 00:00:10.888 [Pipeline] httpRequest 00:00:10.893 HttpMethod: GET 00:00:10.894 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:10.895 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:10.915 Response Code: HTTP/1.1 200 OK 00:00:10.915 Success: Status code 200 is in the accepted range: 200,404 00:00:10.916 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:52.607 [Pipeline] sh 00:00:52.895 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:55.447 [Pipeline] sh 00:00:55.732 + git -C spdk log --oneline -n5 00:00:55.732 719d03c6a sock/uring: only register net impl if supported 00:00:55.732 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:55.732 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:55.732 6c7c1f57e accel: add sequence outstanding stat 00:00:55.732 3bc8e6a26 accel: add utility to put task 00:00:55.752 [Pipeline] withCredentials 00:00:55.763 > git --version # timeout=10 00:00:55.774 > git --version # 'git version 2.39.2' 00:00:55.791 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:55.793 [Pipeline] { 00:00:55.802 [Pipeline] retry 00:00:55.804 [Pipeline] { 00:00:55.820 [Pipeline] sh 00:00:56.112 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:56.384 [Pipeline] } 00:00:56.407 [Pipeline] // retry 00:00:56.413 [Pipeline] } 00:00:56.436 [Pipeline] // withCredentials 00:00:56.447 [Pipeline] httpRequest 00:00:56.474 [Pipeline] echo 00:00:56.476 Sorcerer 10.211.164.101 is alive 00:00:56.485 [Pipeline] httpRequest 00:00:56.490 HttpMethod: GET 00:00:56.490 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:56.491 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:56.498 Response Code: HTTP/1.1 200 OK 00:00:56.499 Success: Status code 200 is in the accepted range: 200,404 00:00:56.499 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:19.700 [Pipeline] sh 00:01:19.983 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:21.894 [Pipeline] sh 00:01:22.179 + git -C dpdk log --oneline -n5 00:01:22.179 eeb0605f11 version: 23.11.0 00:01:22.179 238778122a doc: 
update release notes for 23.11 00:01:22.179 46aa6b3cfc doc: fix description of RSS features 00:01:22.179 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:22.179 7e421ae345 devtools: support skipping forbid rule check 00:01:22.190 [Pipeline] } 00:01:22.206 [Pipeline] // stage 00:01:22.213 [Pipeline] stage 00:01:22.215 [Pipeline] { (Prepare) 00:01:22.233 [Pipeline] writeFile 00:01:22.247 [Pipeline] sh 00:01:22.528 + logger -p user.info -t JENKINS-CI 00:01:22.541 [Pipeline] sh 00:01:22.822 + logger -p user.info -t JENKINS-CI 00:01:22.834 [Pipeline] sh 00:01:23.117 + cat autorun-spdk.conf 00:01:23.117 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.117 SPDK_TEST_NVMF=1 00:01:23.117 SPDK_TEST_NVME_CLI=1 00:01:23.117 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.117 SPDK_TEST_NVMF_NICS=e810 00:01:23.117 SPDK_TEST_VFIOUSER=1 00:01:23.117 SPDK_RUN_UBSAN=1 00:01:23.117 NET_TYPE=phy 00:01:23.117 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:23.117 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.125 RUN_NIGHTLY=1 00:01:23.129 [Pipeline] readFile 00:01:23.155 [Pipeline] withEnv 00:01:23.157 [Pipeline] { 00:01:23.170 [Pipeline] sh 00:01:23.454 + set -ex 00:01:23.454 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:23.454 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:23.454 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.454 ++ SPDK_TEST_NVMF=1 00:01:23.454 ++ SPDK_TEST_NVME_CLI=1 00:01:23.454 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.454 ++ SPDK_TEST_NVMF_NICS=e810 00:01:23.454 ++ SPDK_TEST_VFIOUSER=1 00:01:23.454 ++ SPDK_RUN_UBSAN=1 00:01:23.454 ++ NET_TYPE=phy 00:01:23.454 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:23.454 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.454 ++ RUN_NIGHTLY=1 00:01:23.454 + case $SPDK_TEST_NVMF_NICS in 00:01:23.454 + DRIVERS=ice 00:01:23.454 + [[ tcp == \r\d\m\a ]] 00:01:23.454 + [[ -n ice ]] 00:01:23.454 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:23.454 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:23.454 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:23.454 rmmod: ERROR: Module irdma is not currently loaded 00:01:23.454 rmmod: ERROR: Module i40iw is not currently loaded 00:01:23.454 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:23.454 + true 00:01:23.454 + for D in $DRIVERS 00:01:23.454 + sudo modprobe ice 00:01:23.454 + exit 0 00:01:23.464 [Pipeline] } 00:01:23.481 [Pipeline] // withEnv 00:01:23.485 [Pipeline] } 00:01:23.502 [Pipeline] // stage 00:01:23.511 [Pipeline] catchError 00:01:23.513 [Pipeline] { 00:01:23.527 [Pipeline] timeout 00:01:23.528 Timeout set to expire in 50 min 00:01:23.529 [Pipeline] { 00:01:23.544 [Pipeline] stage 00:01:23.546 [Pipeline] { (Tests) 00:01:23.561 [Pipeline] sh 00:01:23.848 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:23.848 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:23.848 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:23.848 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:23.848 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.848 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:23.848 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:23.848 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:23.848 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:23.848 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:23.848 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:23.848 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:23.848 + source /etc/os-release 00:01:23.848 ++ NAME='Fedora Linux' 00:01:23.848 ++ VERSION='38 (Cloud Edition)' 00:01:23.848 ++ ID=fedora 00:01:23.848 ++ VERSION_ID=38 00:01:23.848 ++ VERSION_CODENAME= 00:01:23.848 ++ PLATFORM_ID=platform:f38 00:01:23.848 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:23.848 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:23.848 ++ LOGO=fedora-logo-icon 00:01:23.848 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:23.848 ++ HOME_URL=https://fedoraproject.org/ 00:01:23.848 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:23.848 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:23.848 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:23.848 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:23.848 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:23.848 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:23.848 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:23.848 ++ SUPPORT_END=2024-05-14 00:01:23.848 ++ VARIANT='Cloud Edition' 00:01:23.848 ++ VARIANT_ID=cloud 00:01:23.848 + uname -a 00:01:23.848 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:23.848 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:24.789 Hugepages 00:01:24.789 node hugesize free / total 00:01:24.789 node0 1048576kB 0 / 0 00:01:24.789 node0 2048kB 0 / 0 00:01:24.789 node1 1048576kB 0 / 0 00:01:24.789 node1 2048kB 0 / 0 00:01:24.789 00:01:24.789 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:24.789 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:24.789 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:24.789 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:24.789 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:24.789 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:24.789 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:24.789 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:24.789 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:24.789 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:24.789 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:24.789 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:24.789 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:24.789 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:24.789 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:24.789 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:24.789 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:24.789 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:25.049 + rm -f /tmp/spdk-ld-path 00:01:25.049 + source autorun-spdk.conf 00:01:25.049 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.049 ++ SPDK_TEST_NVMF=1 00:01:25.049 ++ SPDK_TEST_NVME_CLI=1 00:01:25.049 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.049 ++ SPDK_TEST_NVMF_NICS=e810 00:01:25.049 ++ SPDK_TEST_VFIOUSER=1 00:01:25.049 ++ SPDK_RUN_UBSAN=1 00:01:25.049 ++ NET_TYPE=phy 00:01:25.049 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:25.049 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.049 ++ RUN_NIGHTLY=1 00:01:25.049 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:25.049 + [[ -n '' ]] 00:01:25.049 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:25.049 + for M in /var/spdk/build-*-manifest.txt 00:01:25.049 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:25.049 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:25.049 + for M in /var/spdk/build-*-manifest.txt 00:01:25.049 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:25.049 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:25.049 ++ uname 00:01:25.049 + [[ Linux == \L\i\n\u\x ]] 00:01:25.049 + sudo dmesg -T 00:01:25.049 + sudo dmesg --clear 00:01:25.049 + dmesg_pid=2951897 00:01:25.049 + [[ Fedora Linux == FreeBSD ]] 00:01:25.049 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.049 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.049 + sudo dmesg -Tw 00:01:25.049 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:25.049 + [[ -x /usr/src/fio-static/fio ]] 00:01:25.049 + export FIO_BIN=/usr/src/fio-static/fio 00:01:25.049 + FIO_BIN=/usr/src/fio-static/fio 00:01:25.049 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:25.049 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:25.049 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:25.049 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.049 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.049 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:25.049 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.049 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.049 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:25.049 Test configuration: 00:01:25.049 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.049 SPDK_TEST_NVMF=1 00:01:25.049 SPDK_TEST_NVME_CLI=1 00:01:25.049 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.049 SPDK_TEST_NVMF_NICS=e810 00:01:25.049 SPDK_TEST_VFIOUSER=1 00:01:25.049 SPDK_RUN_UBSAN=1 00:01:25.049 NET_TYPE=phy 00:01:25.049 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:25.049 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.049 RUN_NIGHTLY=1 03:04:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:25.049 03:04:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:25.049 03:04:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:25.049 03:04:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:25.049 03:04:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.049 03:04:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.049 03:04:31 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.049 03:04:31 -- paths/export.sh@5 -- $ export PATH 00:01:25.049 03:04:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.049 03:04:31 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:25.049 03:04:31 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:25.049 03:04:31 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721005471.XXXXXX 00:01:25.049 03:04:31 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721005471.cjN1rY 00:01:25.049 03:04:31 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:25.049 03:04:31 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:01:25.049 03:04:31 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.049 03:04:31 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:25.049 03:04:31 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:25.049 03:04:31 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:25.049 03:04:31 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:25.049 03:04:31 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:25.049 03:04:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.050 03:04:31 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:25.050 03:04:31 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:25.050 03:04:31 -- pm/common@17 -- $ local monitor 00:01:25.050 03:04:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.050 03:04:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.050 03:04:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.050 03:04:31 -- pm/common@21 -- $ date +%s 00:01:25.050 03:04:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.050 03:04:31 -- pm/common@21 -- $ date +%s 00:01:25.050 03:04:31 -- pm/common@25 -- $ sleep 1 00:01:25.050 03:04:31 -- pm/common@21 -- $ date +%s 00:01:25.050 03:04:31 -- pm/common@21 -- $ date +%s 00:01:25.050 03:04:31 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721005471 00:01:25.050 03:04:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721005471 00:01:25.050 03:04:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721005471 00:01:25.050 03:04:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721005471 00:01:25.050 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721005471_collect-vmstat.pm.log 00:01:25.050 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721005471_collect-cpu-load.pm.log 00:01:25.050 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721005471_collect-cpu-temp.pm.log 00:01:25.050 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721005471_collect-bmc-pm.bmc.pm.log 00:01:25.989 03:04:32 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:25.989 03:04:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:25.989 03:04:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:25.989 03:04:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:25.989 03:04:32 -- spdk/autobuild.sh@16 -- $ date -u 00:01:25.989 Mon Jul 15 01:04:32 AM UTC 2024 00:01:25.989 03:04:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:25.989 v24.09-pre-202-g719d03c6a 00:01:25.989 03:04:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:25.989 03:04:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:25.989 03:04:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:25.989 03:04:32 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:25.989 03:04:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:25.989 03:04:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.248 ************************************ 00:01:26.248 START TEST ubsan 00:01:26.248 ************************************ 00:01:26.248 03:04:32 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:26.248 using ubsan 00:01:26.248 00:01:26.248 real 0m0.000s 00:01:26.248 user 0m0.000s 00:01:26.248 sys 0m0.000s 00:01:26.248 03:04:32 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:26.248 03:04:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:26.248 ************************************ 00:01:26.248 END TEST ubsan 00:01:26.248 ************************************ 00:01:26.248 03:04:32 -- common/autotest_common.sh@1142 -- $ return 0 00:01:26.248 03:04:32 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:26.248 03:04:32 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:26.248 03:04:32 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:26.248 03:04:32 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:26.248 03:04:32 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:01:26.248 03:04:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.248 ************************************ 00:01:26.248 START TEST build_native_dpdk 00:01:26.248 ************************************ 00:01:26.248 03:04:32 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:26.248 eeb0605f11 version: 23.11.0 00:01:26.248 238778122a doc: update release notes for 23.11 00:01:26.248 46aa6b3cfc doc: fix description of RSS features 00:01:26.248 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:26.248 7e421ae345 devtools: support skipping forbid rule check 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:26.248 03:04:32 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:26.248 03:04:32 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:26.248 03:04:32 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:26.248 03:04:32 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:26.248 03:04:32 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:26.248 03:04:32 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:26.249 03:04:32 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:26.249 03:04:32 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:26.249 03:04:32 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:26.249 patching file config/rte_config.h 00:01:26.249 Hunk #1 succeeded at 60 (offset 1 line). 00:01:26.249 03:04:32 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:26.249 03:04:32 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:26.249 03:04:32 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:26.249 03:04:32 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:26.249 03:04:32 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:30.444 The Meson build system 00:01:30.444 Version: 1.3.1 00:01:30.444 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:30.444 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:30.444 Build type: native build 00:01:30.444 Program cat found: YES (/usr/bin/cat) 00:01:30.444 Project name: DPDK 00:01:30.444 Project version: 23.11.0 00:01:30.444 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:30.444 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:30.444 Host machine cpu family: x86_64 00:01:30.444 Host machine cpu: x86_64 00:01:30.444 Message: ## Building in Developer Mode ## 00:01:30.444 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:30.444 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:30.444 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:30.444 Program python3 found: YES (/usr/bin/python3) 00:01:30.444 Program cat found: YES (/usr/bin/cat) 00:01:30.444 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
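The scripts/common.sh trace above (cmp_versions) splits each version string on '.', '-' and ':' and compares the fields numerically, left to right; 23 > 21 in the first field, so `lt 23.11.0 21.11.0` returns 1 and the build goes on to patch config/rte_config.h. A minimal bash sketch of that field-by-field comparison (function name and exact structure here are illustrative, not the script itself):

# Return 0 (true) iff $1 < $2, comparing dot/dash/colon-separated numeric
# fields left to right; missing fields count as 0. Illustrative reduction
# of the cmp_versions logic traced above.
version_lt() {
    local IFS='.-:'
    local -a ver1=($1) ver2=($2)
    local v len1=${#ver1[@]} len2=${#ver2[@]}
    for (( v = 0; v < (len1 > len2 ? len1 : len2); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # assumes plain numeric fields
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0"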
00:01:30.444 Compiler for C supports arguments -march=native: YES 00:01:30.444 Checking for size of "void *" : 8 00:01:30.444 Checking for size of "void *" : 8 (cached) 00:01:30.444 Library m found: YES 00:01:30.444 Library numa found: YES 00:01:30.444 Has header "numaif.h" : YES 00:01:30.444 Library fdt found: NO 00:01:30.444 Library execinfo found: NO 00:01:30.444 Has header "execinfo.h" : YES 00:01:30.444 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:30.444 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:30.444 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:30.444 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:30.444 Run-time dependency openssl found: YES 3.0.9 00:01:30.444 Run-time dependency libpcap found: YES 1.10.4 00:01:30.444 Has header "pcap.h" with dependency libpcap: YES 00:01:30.444 Compiler for C supports arguments -Wcast-qual: YES 00:01:30.444 Compiler for C supports arguments -Wdeprecated: YES 00:01:30.444 Compiler for C supports arguments -Wformat: YES 00:01:30.444 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:30.444 Compiler for C supports arguments -Wformat-security: NO 00:01:30.444 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:30.444 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:30.444 Compiler for C supports arguments -Wnested-externs: YES 00:01:30.444 Compiler for C supports arguments -Wold-style-definition: YES 00:01:30.445 Compiler for C supports arguments -Wpointer-arith: YES 00:01:30.445 Compiler for C supports arguments -Wsign-compare: YES 00:01:30.445 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:30.445 Compiler for C supports arguments -Wundef: YES 00:01:30.445 Compiler for C supports arguments -Wwrite-strings: YES 00:01:30.445 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:30.445 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:30.445 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:30.445 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:30.445 Program objdump found: YES (/usr/bin/objdump) 00:01:30.445 Compiler for C supports arguments -mavx512f: YES 00:01:30.445 Checking if "AVX512 checking" compiles: YES 00:01:30.445 Fetching value of define "__SSE4_2__" : 1 00:01:30.445 Fetching value of define "__AES__" : 1 00:01:30.445 Fetching value of define "__AVX__" : 1 00:01:30.445 Fetching value of define "__AVX2__" : (undefined) 00:01:30.445 Fetching value of define "__AVX512BW__" : (undefined) 00:01:30.445 Fetching value of define "__AVX512CD__" : (undefined) 00:01:30.445 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:30.445 Fetching value of define "__AVX512F__" : (undefined) 00:01:30.445 Fetching value of define "__AVX512VL__" : (undefined) 00:01:30.445 Fetching value of define "__PCLMUL__" : 1 00:01:30.445 Fetching value of define "__RDRND__" : 1 00:01:30.445 Fetching value of define "__RDSEED__" : (undefined) 00:01:30.445 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:30.445 Fetching value of define "__znver1__" : (undefined) 00:01:30.445 Fetching value of define "__znver2__" : (undefined) 00:01:30.445 Fetching value of define "__znver3__" : (undefined) 00:01:30.445 Fetching value of define "__znver4__" : (undefined) 00:01:30.445 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:30.445 Message: lib/log: Defining dependency "log" 00:01:30.445 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:30.445 Message: lib/telemetry: Defining dependency "telemetry" 00:01:30.445 Checking for function "getentropy" : NO 00:01:30.445 Message: lib/eal: Defining dependency "eal" 00:01:30.445 Message: lib/ring: Defining dependency "ring" 00:01:30.445 Message: lib/rcu: Defining dependency "rcu" 00:01:30.445 Message: lib/mempool: Defining dependency "mempool" 00:01:30.445 Message: lib/mbuf: Defining dependency "mbuf" 00:01:30.445 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:30.445 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:30.445 Compiler for C supports arguments -mpclmul: YES 00:01:30.445 Compiler for C supports arguments -maes: YES 00:01:30.445 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:30.445 Compiler for C supports arguments -mavx512bw: YES 00:01:30.445 Compiler for C supports arguments -mavx512dq: YES 00:01:30.445 Compiler for C supports arguments -mavx512vl: YES 00:01:30.445 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:30.445 Compiler for C supports arguments -mavx2: YES 00:01:30.445 Compiler for C supports arguments -mavx: YES 00:01:30.445 Message: lib/net: Defining dependency "net" 00:01:30.445 Message: lib/meter: Defining dependency "meter" 00:01:30.445 Message: lib/ethdev: Defining dependency "ethdev" 00:01:30.445 Message: lib/pci: Defining dependency "pci" 00:01:30.445 Message: lib/cmdline: Defining dependency "cmdline" 00:01:30.445 Message: lib/metrics: Defining dependency "metrics" 00:01:30.445 Message: lib/hash: Defining dependency "hash" 00:01:30.445 Message: lib/timer: Defining dependency "timer" 00:01:30.445 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:30.445 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:30.445 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:30.445 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:30.445 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:30.445 Message: lib/acl: Defining dependency "acl" 00:01:30.445 Message: lib/bbdev: Defining dependency "bbdev" 00:01:30.445 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:30.445 Run-time dependency libelf found: YES 0.190 00:01:30.445 Message: lib/bpf: Defining dependency "bpf" 00:01:30.445 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:30.445 Message: lib/compressdev: Defining dependency "compressdev" 00:01:30.445 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:30.445 Message: lib/distributor: Defining dependency "distributor" 00:01:30.445 Message: lib/dmadev: Defining dependency "dmadev" 00:01:30.445 Message: lib/efd: Defining dependency "efd" 00:01:30.445 Message: lib/eventdev: Defining dependency "eventdev" 00:01:30.445 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:30.445 Message: lib/gpudev: Defining dependency "gpudev" 00:01:30.445 Message: lib/gro: Defining dependency "gro" 00:01:30.445 Message: lib/gso: Defining dependency "gso" 00:01:30.445 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:30.445 Message: lib/jobstats: Defining dependency "jobstats" 00:01:30.445 Message: lib/latencystats: Defining dependency "latencystats" 00:01:30.445 Message: lib/lpm: Defining dependency "lpm" 00:01:30.445 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:30.445 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:30.445 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:30.445 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:30.445 Message: lib/member: Defining dependency "member" 00:01:30.445 Message: lib/pcapng: Defining dependency "pcapng" 00:01:30.445 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:30.445 Message: lib/power: Defining dependency "power" 00:01:30.445 Message: lib/rawdev: Defining dependency "rawdev" 00:01:30.445 Message: lib/regexdev: Defining dependency "regexdev" 00:01:30.445 Message: lib/mldev: Defining dependency "mldev" 00:01:30.445 Message: lib/rib: Defining dependency "rib" 00:01:30.445 Message: lib/reorder: Defining dependency "reorder" 00:01:30.445 Message: lib/sched: Defining dependency "sched" 00:01:30.445 Message: lib/security: Defining dependency "security" 00:01:30.445 Message: lib/stack: Defining dependency "stack" 00:01:30.445 Has header "linux/userfaultfd.h" : YES 00:01:30.445 Has header "linux/vduse.h" : YES 00:01:30.445 Message: lib/vhost: Defining dependency "vhost" 00:01:30.445 Message: lib/ipsec: Defining dependency "ipsec" 00:01:30.445 Message: lib/pdcp: Defining dependency "pdcp" 00:01:30.445 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:30.445 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:30.445 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:30.445 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:30.445 Message: lib/fib: Defining dependency "fib" 00:01:30.445 Message: lib/port: Defining dependency "port" 00:01:30.445 Message: lib/pdump: Defining dependency "pdump" 00:01:30.445 Message: lib/table: Defining dependency "table" 00:01:30.445 Message: lib/pipeline: Defining dependency "pipeline" 00:01:30.445 Message: lib/graph: Defining dependency "graph" 00:01:30.445 Message: lib/node: Defining dependency "node" 00:01:31.827 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:31.827 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:31.827 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:31.827 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:31.827 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:31.827 Compiler for C supports arguments -Wno-unused-value: YES 00:01:31.827 Compiler for C supports arguments -Wno-format: YES 00:01:31.827 Compiler for C supports arguments -Wno-format-security: YES 00:01:31.827 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:31.827 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:31.827 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:31.827 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:31.827 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:31.827 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:31.827 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:31.827 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:31.827 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:31.827 Has header "sys/epoll.h" : YES 00:01:31.827 Program doxygen found: YES (/usr/bin/doxygen) 00:01:31.827 Configuring doxy-api-html.conf using configuration 00:01:31.827 Configuring doxy-api-man.conf using configuration 00:01:31.827 Program mandb found: YES (/usr/bin/mandb) 00:01:31.827 Program sphinx-build found: NO 00:01:31.827 Configuring rte_build_config.h using configuration 00:01:31.827 Message: 00:01:31.827 ================= 00:01:31.827 Applications Enabled 00:01:31.827 
================= 00:01:31.827 00:01:31.827 apps: 00:01:31.827 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:31.827 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:31.827 test-pmd, test-regex, test-sad, test-security-perf, 00:01:31.827 00:01:31.827 Message: 00:01:31.827 ================= 00:01:31.827 Libraries Enabled 00:01:31.827 ================= 00:01:31.827 00:01:31.827 libs: 00:01:31.827 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:31.827 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:31.827 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:31.827 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:31.827 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:31.827 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:31.827 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:31.827 00:01:31.827 00:01:31.827 Message: 00:01:31.827 =============== 00:01:31.827 Drivers Enabled 00:01:31.827 =============== 00:01:31.827 00:01:31.827 common: 00:01:31.827 00:01:31.827 bus: 00:01:31.827 pci, vdev, 00:01:31.827 mempool: 00:01:31.827 ring, 00:01:31.827 dma: 00:01:31.827 00:01:31.827 net: 00:01:31.827 i40e, 00:01:31.827 raw: 00:01:31.827 00:01:31.827 crypto: 00:01:31.827 00:01:31.827 compress: 00:01:31.827 00:01:31.827 regex: 00:01:31.827 00:01:31.827 ml: 00:01:31.827 00:01:31.827 vdpa: 00:01:31.827 00:01:31.827 event: 00:01:31.827 00:01:31.827 baseband: 00:01:31.827 00:01:31.827 gpu: 00:01:31.827 00:01:31.827 00:01:31.827 Message: 00:01:31.827 ================= 00:01:31.827 Content Skipped 00:01:31.827 ================= 00:01:31.827 00:01:31.827 apps: 00:01:31.827 00:01:31.827 libs: 00:01:31.827 00:01:31.827 drivers: 00:01:31.827 common/cpt: not in enabled drivers build config 00:01:31.827 common/dpaax: not in enabled drivers build config 00:01:31.827 common/iavf: not in enabled drivers build config 00:01:31.827 common/idpf: not in enabled drivers build config 00:01:31.827 common/mvep: not in enabled drivers build config 00:01:31.827 common/octeontx: not in enabled drivers build config 00:01:31.827 bus/auxiliary: not in enabled drivers build config 00:01:31.827 bus/cdx: not in enabled drivers build config 00:01:31.827 bus/dpaa: not in enabled drivers build config 00:01:31.827 bus/fslmc: not in enabled drivers build config 00:01:31.827 bus/ifpga: not in enabled drivers build config 00:01:31.827 bus/platform: not in enabled drivers build config 00:01:31.827 bus/vmbus: not in enabled drivers build config 00:01:31.827 common/cnxk: not in enabled drivers build config 00:01:31.827 common/mlx5: not in enabled drivers build config 00:01:31.827 common/nfp: not in enabled drivers build config 00:01:31.827 common/qat: not in enabled drivers build config 00:01:31.827 common/sfc_efx: not in enabled drivers build config 00:01:31.827 mempool/bucket: not in enabled drivers build config 00:01:31.827 mempool/cnxk: not in enabled drivers build config 00:01:31.827 mempool/dpaa: not in enabled drivers build config 00:01:31.827 mempool/dpaa2: not in enabled drivers build config 00:01:31.827 mempool/octeontx: not in enabled drivers build config 00:01:31.827 mempool/stack: not in enabled drivers build config 00:01:31.827 dma/cnxk: not in enabled drivers build config 00:01:31.827 dma/dpaa: not in enabled drivers build config 00:01:31.827 dma/dpaa2: not in enabled drivers build 
config 00:01:31.827 dma/hisilicon: not in enabled drivers build config 00:01:31.827 dma/idxd: not in enabled drivers build config 00:01:31.827 dma/ioat: not in enabled drivers build config 00:01:31.827 dma/skeleton: not in enabled drivers build config 00:01:31.827 net/af_packet: not in enabled drivers build config 00:01:31.827 net/af_xdp: not in enabled drivers build config 00:01:31.827 net/ark: not in enabled drivers build config 00:01:31.827 net/atlantic: not in enabled drivers build config 00:01:31.827 net/avp: not in enabled drivers build config 00:01:31.827 net/axgbe: not in enabled drivers build config 00:01:31.827 net/bnx2x: not in enabled drivers build config 00:01:31.827 net/bnxt: not in enabled drivers build config 00:01:31.827 net/bonding: not in enabled drivers build config 00:01:31.827 net/cnxk: not in enabled drivers build config 00:01:31.827 net/cpfl: not in enabled drivers build config 00:01:31.827 net/cxgbe: not in enabled drivers build config 00:01:31.827 net/dpaa: not in enabled drivers build config 00:01:31.827 net/dpaa2: not in enabled drivers build config 00:01:31.827 net/e1000: not in enabled drivers build config 00:01:31.827 net/ena: not in enabled drivers build config 00:01:31.827 net/enetc: not in enabled drivers build config 00:01:31.827 net/enetfec: not in enabled drivers build config 00:01:31.827 net/enic: not in enabled drivers build config 00:01:31.827 net/failsafe: not in enabled drivers build config 00:01:31.827 net/fm10k: not in enabled drivers build config 00:01:31.827 net/gve: not in enabled drivers build config 00:01:31.827 net/hinic: not in enabled drivers build config 00:01:31.827 net/hns3: not in enabled drivers build config 00:01:31.827 net/iavf: not in enabled drivers build config 00:01:31.827 net/ice: not in enabled drivers build config 00:01:31.827 net/idpf: not in enabled drivers build config 00:01:31.827 net/igc: not in enabled drivers build config 00:01:31.827 net/ionic: not in enabled drivers build config 00:01:31.827 net/ipn3ke: not in enabled drivers build config 00:01:31.827 net/ixgbe: not in enabled drivers build config 00:01:31.827 net/mana: not in enabled drivers build config 00:01:31.827 net/memif: not in enabled drivers build config 00:01:31.827 net/mlx4: not in enabled drivers build config 00:01:31.827 net/mlx5: not in enabled drivers build config 00:01:31.827 net/mvneta: not in enabled drivers build config 00:01:31.827 net/mvpp2: not in enabled drivers build config 00:01:31.827 net/netvsc: not in enabled drivers build config 00:01:31.827 net/nfb: not in enabled drivers build config 00:01:31.827 net/nfp: not in enabled drivers build config 00:01:31.827 net/ngbe: not in enabled drivers build config 00:01:31.827 net/null: not in enabled drivers build config 00:01:31.828 net/octeontx: not in enabled drivers build config 00:01:31.828 net/octeon_ep: not in enabled drivers build config 00:01:31.828 net/pcap: not in enabled drivers build config 00:01:31.828 net/pfe: not in enabled drivers build config 00:01:31.828 net/qede: not in enabled drivers build config 00:01:31.828 net/ring: not in enabled drivers build config 00:01:31.828 net/sfc: not in enabled drivers build config 00:01:31.828 net/softnic: not in enabled drivers build config 00:01:31.828 net/tap: not in enabled drivers build config 00:01:31.828 net/thunderx: not in enabled drivers build config 00:01:31.828 net/txgbe: not in enabled drivers build config 00:01:31.828 net/vdev_netvsc: not in enabled drivers build config 00:01:31.828 net/vhost: not in enabled drivers build config 
00:01:31.828 net/virtio: not in enabled drivers build config 00:01:31.828 net/vmxnet3: not in enabled drivers build config 00:01:31.828 raw/cnxk_bphy: not in enabled drivers build config 00:01:31.828 raw/cnxk_gpio: not in enabled drivers build config 00:01:31.828 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:31.828 raw/ifpga: not in enabled drivers build config 00:01:31.828 raw/ntb: not in enabled drivers build config 00:01:31.828 raw/skeleton: not in enabled drivers build config 00:01:31.828 crypto/armv8: not in enabled drivers build config 00:01:31.828 crypto/bcmfs: not in enabled drivers build config 00:01:31.828 crypto/caam_jr: not in enabled drivers build config 00:01:31.828 crypto/ccp: not in enabled drivers build config 00:01:31.828 crypto/cnxk: not in enabled drivers build config 00:01:31.828 crypto/dpaa_sec: not in enabled drivers build config 00:01:31.828 crypto/dpaa2_sec: not in enabled drivers build config 00:01:31.828 crypto/ipsec_mb: not in enabled drivers build config 00:01:31.828 crypto/mlx5: not in enabled drivers build config 00:01:31.828 crypto/mvsam: not in enabled drivers build config 00:01:31.828 crypto/nitrox: not in enabled drivers build config 00:01:31.828 crypto/null: not in enabled drivers build config 00:01:31.828 crypto/octeontx: not in enabled drivers build config 00:01:31.828 crypto/openssl: not in enabled drivers build config 00:01:31.828 crypto/scheduler: not in enabled drivers build config 00:01:31.828 crypto/uadk: not in enabled drivers build config 00:01:31.828 crypto/virtio: not in enabled drivers build config 00:01:31.828 compress/isal: not in enabled drivers build config 00:01:31.828 compress/mlx5: not in enabled drivers build config 00:01:31.828 compress/octeontx: not in enabled drivers build config 00:01:31.828 compress/zlib: not in enabled drivers build config 00:01:31.828 regex/mlx5: not in enabled drivers build config 00:01:31.828 regex/cn9k: not in enabled drivers build config 00:01:31.828 ml/cnxk: not in enabled drivers build config 00:01:31.828 vdpa/ifc: not in enabled drivers build config 00:01:31.828 vdpa/mlx5: not in enabled drivers build config 00:01:31.828 vdpa/nfp: not in enabled drivers build config 00:01:31.828 vdpa/sfc: not in enabled drivers build config 00:01:31.828 event/cnxk: not in enabled drivers build config 00:01:31.828 event/dlb2: not in enabled drivers build config 00:01:31.828 event/dpaa: not in enabled drivers build config 00:01:31.828 event/dpaa2: not in enabled drivers build config 00:01:31.828 event/dsw: not in enabled drivers build config 00:01:31.828 event/opdl: not in enabled drivers build config 00:01:31.828 event/skeleton: not in enabled drivers build config 00:01:31.828 event/sw: not in enabled drivers build config 00:01:31.828 event/octeontx: not in enabled drivers build config 00:01:31.828 baseband/acc: not in enabled drivers build config 00:01:31.828 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:31.828 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:31.828 baseband/la12xx: not in enabled drivers build config 00:01:31.828 baseband/null: not in enabled drivers build config 00:01:31.828 baseband/turbo_sw: not in enabled drivers build config 00:01:31.828 gpu/cuda: not in enabled drivers build config 00:01:31.828 00:01:31.828 00:01:31.828 Build targets in project: 220 00:01:31.828 00:01:31.828 DPDK 23.11.0 00:01:31.828 00:01:31.828 User defined options 00:01:31.828 libdir : lib 00:01:31.828 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.828 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:31.828 c_link_args : 00:01:31.828 enable_docs : false 00:01:31.828 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:31.828 enable_kmods : false 00:01:31.828 machine : native 00:01:31.828 tests : false 00:01:31.828 00:01:31.828 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:31.828 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:31.828 03:04:37 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:31.828 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:31.828 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:31.828 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:31.828 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:31.828 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:31.828 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:31.828 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:31.828 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:31.828 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:31.828 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:31.828 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:31.828 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:31.828 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:32.087 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:32.087 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:32.087 [15/710] Linking static target lib/librte_kvargs.a 00:01:32.087 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:32.087 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:32.087 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:32.087 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:32.087 [20/710] Linking static target lib/librte_log.a 00:01:32.087 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:32.348 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.922 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:32.922 [24/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.922 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:32.922 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:32.922 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:32.922 [28/710] Linking target lib/librte_log.so.24.0 00:01:32.922 [29/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:32.922 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:32.922 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:32.922 [32/710] Compiling C object 
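Condensed, the external-DPDK configure-and-build sequence in this excerpt comes down to the following (prefix, c_args, and the driver list are copied from the trace; `meson setup` is the spelling the log's own deprecation warning asks for, and the final install step is assumed here rather than shown in this excerpt):

# Configure DPDK 23.11 out of tree and build it, as traced above.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
meson setup build-tmp \
    --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
    --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
ninja -C build-tmp -j48
# ninja -C build-tmp install   # assumed follow-up that stages DPDK under dpdk/build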
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:32.922 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:32.922 [34/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:32.922 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:32.922 [36/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:32.922 [37/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:32.922 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:32.922 [39/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:32.922 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:32.922 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:32.922 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:32.922 [43/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:32.922 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:33.184 [45/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:33.184 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:33.184 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:33.184 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:33.184 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:33.184 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:33.184 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:33.184 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:33.184 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:33.184 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:33.184 [55/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:33.184 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:33.184 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:33.184 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:33.184 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:33.184 [60/710] Linking target lib/librte_kvargs.so.24.0 00:01:33.184 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:33.184 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:33.449 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:33.450 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:33.450 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:33.450 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:33.711 [67/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:33.711 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:33.711 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:33.711 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:33.711 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:33.711 
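The meson warning above notes that invoking the configure step as `meson [options]` is deprecated in favor of `meson setup [options]`. A minimal sketch of the equivalent non-deprecated invocation, reconstructed from the "User defined options" summary printed earlier; the exact `-D` spellings below are assumptions based on standard meson conventions, not taken from the log:

    # Run from the DPDK source tree; configures build-tmp, installs under build/
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp -j48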
[72/710] Linking static target lib/librte_pci.a 00:01:33.711 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:33.711 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:33.711 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:33.983 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:33.983 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:33.983 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:33.983 [79/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:33.983 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:33.983 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:33.983 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:33.983 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:34.247 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:34.247 [85/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:34.247 [86/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:34.247 [87/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.247 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:34.247 [89/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:34.247 [90/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:34.247 [91/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:34.247 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:34.247 [93/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:34.247 [94/710] Linking static target lib/librte_ring.a 00:01:34.247 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:34.247 [96/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:34.247 [97/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:34.247 [98/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:34.247 [99/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:34.247 [100/710] Linking static target lib/librte_meter.a 00:01:34.247 [101/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:34.511 [102/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:34.511 [103/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:34.511 [104/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:34.511 [105/710] Linking static target lib/librte_telemetry.a 00:01:34.511 [106/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:34.511 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:34.511 [108/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:34.511 [109/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:34.511 [110/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:34.511 [111/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:34.511 [112/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:34.511 [113/710] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:34.774 [114/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.774 [115/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:34.774 [116/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.774 [117/710] Linking static target lib/librte_eal.a 00:01:34.774 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:34.774 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:34.774 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:34.774 [121/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:34.774 [122/710] Linking static target lib/librte_net.a 00:01:34.774 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:34.774 [124/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:35.040 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:35.040 [126/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:35.040 [127/710] Linking static target lib/librte_mempool.a 00:01:35.040 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:35.040 [129/710] Linking static target lib/librte_cmdline.a 00:01:35.040 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.300 [131/710] Linking target lib/librte_telemetry.so.24.0 00:01:35.300 [132/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.300 [133/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:35.300 [134/710] Linking static target lib/librte_cfgfile.a 00:01:35.300 [135/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:35.300 [136/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:35.300 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:35.300 [138/710] Linking static target lib/librte_metrics.a 00:01:35.300 [139/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:35.300 [140/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:35.300 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:35.300 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:35.300 [143/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:35.563 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:35.563 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:35.563 [146/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:35.563 [147/710] Linking static target lib/librte_rcu.a 00:01:35.563 [148/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:35.563 [149/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:35.563 [150/710] Linking static target lib/librte_bitratestats.a 00:01:35.828 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:35.828 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:35.828 [153/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:35.828 [154/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:35.828 [155/710] Generating 
lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.828 [156/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:35.828 [157/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.088 [158/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.088 [159/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:36.088 [160/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:36.088 [161/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:36.088 [162/710] Linking static target lib/librte_timer.a 00:01:36.088 [163/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.088 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:36.088 [165/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.088 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:36.088 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:36.350 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:36.350 [169/710] Linking static target lib/librte_bbdev.a 00:01:36.350 [170/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.350 [171/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:36.612 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:36.612 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:36.612 [174/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:36.612 [175/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.612 [176/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:36.612 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:36.612 [178/710] Linking static target lib/librte_compressdev.a 00:01:36.612 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:36.612 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:36.876 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:36.876 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:36.876 [183/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:36.876 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:37.142 [185/710] Linking static target lib/librte_distributor.a 00:01:37.142 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:37.142 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.404 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:37.404 [189/710] Linking static target lib/librte_bpf.a 00:01:37.404 [190/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:37.404 [191/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:37.404 [192/710] Linking static target lib/librte_dmadev.a 00:01:37.404 [193/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:37.404 
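The recurring "Generating lib/<name>.sym_chk" steps are DPDK's per-library symbol-export checks, wrapped by meson so their output is captured. A rough sketch of the underlying idea only; the real check is DPDK's buildtools/check-symbols.sh, and the paths and the crude grep here are illustrative assumptions. It compares what a built shared object actually exports against what its version map declares:

    # Exported dynamic symbols of the built library (path assumed under build-tmp)
    nm -D --defined-only build-tmp/lib/librte_log.so.24.0 | awk '{print $3}' | sort > exported.txt
    # Symbols declared in the library's version map in the source tree
    grep -oE 'rte_[a-z0-9_]+' lib/log/version.map | sort -u > declared.txt
    diff exported.txt declared.txt   # any mismatch would fail the check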
[194/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:37.404 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:37.404 [196/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.404 [197/710] Linking static target lib/librte_dispatcher.a 00:01:37.404 [198/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.404 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:37.404 [200/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:37.662 [201/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:37.662 [202/710] Linking static target lib/librte_gpudev.a 00:01:37.662 [203/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:37.662 [204/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:37.662 [205/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:37.662 [206/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:37.662 [207/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:37.662 [208/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:37.662 [209/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:37.662 [210/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:37.662 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:37.662 [212/710] Linking static target lib/librte_gro.a 00:01:37.662 [213/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.662 [214/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:37.940 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:37.940 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.940 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:37.940 [218/710] Linking static target lib/librte_jobstats.a 00:01:37.940 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:37.940 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:38.202 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:38.202 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.202 [223/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.467 [224/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:38.467 [225/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:38.467 [226/710] Linking static target lib/librte_latencystats.a 00:01:38.467 [227/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:38.467 [228/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:38.467 [229/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:38.467 [230/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.467 [231/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:38.467 [232/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:38.467 [233/710] Compiling C object 
lib/librte_member.a.p/member_rte_member.c.o 00:01:38.468 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:38.729 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:38.729 [236/710] Linking static target lib/librte_ip_frag.a 00:01:38.729 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:38.729 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.992 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:38.992 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:38.992 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:38.992 [242/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.992 [243/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:38.992 [244/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.992 [245/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.257 [246/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:39.257 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:39.257 [248/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:39.257 [249/710] Linking static target lib/librte_gso.a 00:01:39.257 [250/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:39.257 [251/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:39.257 [252/710] Linking static target lib/librte_regexdev.a 00:01:39.257 [253/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:39.519 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:39.519 [255/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:39.519 [256/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:39.519 [257/710] Linking static target lib/librte_rawdev.a 00:01:39.519 [258/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:39.519 [259/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:39.519 [260/710] Linking static target lib/librte_mldev.a 00:01:39.519 [261/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.519 [262/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:39.519 [263/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:39.781 [264/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:39.781 [265/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:39.781 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:39.781 [267/710] Linking static target lib/librte_pcapng.a 00:01:39.781 [268/710] Linking static target lib/librte_efd.a 00:01:39.781 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:39.781 [270/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:39.781 [271/710] Linking static target lib/acl/libavx2_tmp.a 00:01:39.781 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:39.781 [273/710] Linking static target lib/librte_stack.a 00:01:40.042 [274/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:40.042 [275/710] Compiling 
C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:40.042 [276/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:40.042 [277/710] Linking static target lib/librte_lpm.a 00:01:40.042 [278/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.042 [279/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:40.042 [280/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.042 [281/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.042 [282/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:40.042 [283/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:40.042 [284/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.042 [285/710] Linking static target lib/librte_hash.a 00:01:40.301 [286/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.301 [287/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.301 [288/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:40.301 [289/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:40.301 [290/710] Linking static target lib/librte_reorder.a 00:01:40.301 [291/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:40.301 [292/710] Linking static target lib/librte_power.a 00:01:40.563 [293/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:40.563 [294/710] Linking static target lib/acl/libavx512_tmp.a 00:01:40.563 [295/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.563 [296/710] Linking static target lib/librte_acl.a 00:01:40.563 [297/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:40.563 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.563 [299/710] Linking static target lib/librte_security.a 00:01:40.563 [300/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.832 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:40.832 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.832 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.832 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.832 [305/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.832 [306/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:40.832 [307/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.832 [308/710] Linking static target lib/librte_mbuf.a 00:01:40.832 [309/710] Linking static target lib/librte_rib.a 00:01:40.832 [310/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:41.093 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:41.093 [312/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.093 [313/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.093 [314/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:41.093 [315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:41.093 [316/710] Compiling C 
object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:41.093 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:41.093 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.358 [319/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:41.358 [320/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:41.358 [321/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:41.358 [322/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:41.358 [323/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:41.358 [324/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:41.358 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:41.358 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.622 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:41.622 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.622 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.622 [330/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.889 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:41.889 [332/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:41.889 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:42.148 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:42.148 [335/710] Linking static target lib/librte_eventdev.a 00:01:42.148 [336/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:42.148 [337/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:42.148 [338/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:42.148 [339/710] Linking static target lib/librte_member.a 00:01:42.418 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:42.418 [341/710] Linking static target lib/librte_cryptodev.a 00:01:42.418 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:42.418 [343/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:42.418 [344/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:42.418 [345/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:42.418 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:42.418 [347/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:42.418 [348/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:42.418 [349/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:42.418 [350/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:42.418 [351/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:42.418 [352/710] Linking static target lib/librte_ethdev.a 00:01:42.678 [353/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:42.678 [354/710] Linking static target lib/librte_fib.a 00:01:42.678 [355/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:42.678 [356/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:42.678 [357/710] 
Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:42.678 [358/710] Linking static target lib/librte_sched.a 00:01:42.678 [359/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.939 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:42.939 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:42.939 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:42.939 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:42.939 [364/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:42.939 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:42.939 [366/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:42.939 [367/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:43.201 [368/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.201 [369/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:43.201 [370/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:43.201 [371/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:43.201 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:43.466 [373/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.466 [374/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:43.729 [375/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:43.729 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:43.729 [377/710] Linking static target lib/librte_pdump.a 00:01:43.729 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:43.729 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:43.729 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:43.729 [381/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:43.729 [382/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:43.729 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:43.729 [384/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:43.729 [385/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:43.993 [386/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:43.993 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:43.993 [388/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:43.993 [389/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:43.993 [390/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:43.993 [391/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.254 [392/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:44.254 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:44.254 [394/710] Linking static target lib/librte_ipsec.a 00:01:44.254 [395/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.254 [396/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:44.254 [397/710] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:44.254 [398/710] Linking static target lib/librte_table.a 00:01:44.521 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:44.521 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:44.785 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:44.785 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.050 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:45.050 [404/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:45.050 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:45.050 [406/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:45.050 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:45.050 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:45.310 [409/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:45.310 [410/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:45.310 [411/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:45.310 [412/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:45.310 [413/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:45.310 [414/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:45.570 [415/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.570 [416/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:45.570 [417/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.570 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:45.570 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:45.570 [420/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:45.570 [421/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.570 [422/710] Linking static target drivers/librte_bus_vdev.a 00:01:45.833 [423/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:45.833 [424/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.833 [425/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.833 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:45.833 [427/710] Linking static target lib/librte_port.a 00:01:45.833 [428/710] Linking target lib/librte_eal.so.24.0 00:01:46.098 [429/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:46.098 [430/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:46.098 [431/710] Linking static target lib/librte_graph.a 00:01:46.098 [432/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:46.098 [433/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.098 [434/710] Linking static target drivers/librte_bus_pci.a 00:01:46.098 [435/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.098 [436/710] Generating drivers/rte_bus_vdev.sym_chk with 
a custom command (wrapped by meson to capture output) 00:01:46.098 [437/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:46.098 [438/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:46.098 [439/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:46.359 [440/710] Linking target lib/librte_ring.so.24.0 00:01:46.359 [441/710] Linking target lib/librte_meter.so.24.0 00:01:46.359 [442/710] Linking target lib/librte_pci.so.24.0 00:01:46.359 [443/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:46.359 [444/710] Linking target lib/librte_timer.so.24.0 00:01:46.359 [445/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:46.359 [446/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:46.359 [447/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:46.625 [448/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:46.625 [449/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:46.625 [450/710] Linking target lib/librte_rcu.so.24.0 00:01:46.625 [451/710] Linking target lib/librte_cfgfile.so.24.0 00:01:46.625 [452/710] Linking target lib/librte_acl.so.24.0 00:01:46.625 [453/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:46.625 [454/710] Linking target lib/librte_mempool.so.24.0 00:01:46.625 [455/710] Linking target lib/librte_dmadev.so.24.0 00:01:46.625 [456/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:46.625 [457/710] Linking target lib/librte_jobstats.so.24.0 00:01:46.625 [458/710] Linking target lib/librte_rawdev.so.24.0 00:01:46.625 [459/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:46.625 [460/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.625 [461/710] Linking target lib/librte_stack.so.24.0 00:01:46.625 [462/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:46.625 [463/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:46.886 [464/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:46.886 [465/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:46.886 [466/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:46.886 [467/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:46.886 [468/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:46.886 [469/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:46.886 [470/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.887 [471/710] Linking target lib/librte_mbuf.so.24.0 00:01:46.887 [472/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:46.887 [473/710] Linking target lib/librte_rib.so.24.0 00:01:46.887 [474/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:46.887 [475/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.887 [476/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:47.152 [477/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:47.152 [478/710] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.152 [479/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:47.152 [480/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:47.152 [481/710] Linking static target drivers/librte_mempool_ring.a 00:01:47.152 [482/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:47.152 [483/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.152 [484/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:47.152 [485/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:47.152 [486/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:47.152 [487/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:47.152 [488/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:47.152 [489/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:47.152 [490/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:47.152 [491/710] Linking target lib/librte_net.so.24.0 00:01:47.152 [492/710] Linking target lib/librte_bbdev.so.24.0 00:01:47.418 [493/710] Linking target lib/librte_compressdev.so.24.0 00:01:47.418 [494/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:47.418 [495/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:47.418 [496/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:47.418 [497/710] Linking target lib/librte_cryptodev.so.24.0 00:01:47.418 [498/710] Linking target lib/librte_distributor.so.24.0 00:01:47.418 [499/710] Linking target lib/librte_gpudev.so.24.0 00:01:47.418 [500/710] Linking target lib/librte_regexdev.so.24.0 00:01:47.418 [501/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:47.418 [502/710] Linking target lib/librte_reorder.so.24.0 00:01:47.418 [503/710] Linking target lib/librte_sched.so.24.0 00:01:47.418 [504/710] Linking target lib/librte_mldev.so.24.0 00:01:47.418 [505/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:47.418 [506/710] Linking target lib/librte_fib.so.24.0 00:01:47.418 [507/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:47.418 [508/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:47.679 [509/710] Linking target lib/librte_cmdline.so.24.0 00:01:47.679 [510/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:47.679 [511/710] Linking target lib/librte_hash.so.24.0 00:01:47.679 [512/710] Linking target lib/librte_security.so.24.0 00:01:47.679 [513/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:47.679 [514/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:47.679 [515/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:47.942 [516/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:47.942 [517/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:47.942 [518/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:47.942 [519/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:47.942 [520/710] Linking target lib/librte_efd.so.24.0 00:01:47.942 
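The "Generating drivers/rte_<name>.pmd.c with a custom command" steps embed PMD registration metadata into each driver before it is linked as both a static archive and a shared object. That metadata can be read back out of the built artifact; a small sketch using DPDK's bundled usertools script (the library path is an assumption based on the build directory used above):

    python3 usertools/dpdk-pmdinfo.py build-tmp/drivers/librte_mempool_ring.so.24.0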
[521/710] Linking target lib/librte_lpm.so.24.0 00:01:47.942 [522/710] Linking target lib/librte_member.so.24.0 00:01:47.942 [523/710] Linking target lib/librte_ipsec.so.24.0 00:01:47.942 [524/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:48.202 [525/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:48.202 [526/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:48.202 [527/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:48.202 [528/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:48.202 [529/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:48.202 [530/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:48.480 [531/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:48.480 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:48.480 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:48.743 [534/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:48.743 [535/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:48.743 [536/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:48.743 [537/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:48.743 [538/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:49.007 [539/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:49.007 [540/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:49.007 [541/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:49.267 [542/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:49.267 [543/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:49.267 [544/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:49.267 [545/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:49.267 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:49.267 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:49.267 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:49.267 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:49.267 [550/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:49.267 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:49.533 [552/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:49.533 [553/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:49.533 [554/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:49.793 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:49.793 [556/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:49.793 [557/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:49.793 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:50.062 [559/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 
00:01:50.322 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:50.322 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:50.586 [562/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:50.586 [563/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:50.586 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:50.586 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:50.586 [566/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.848 [567/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:50.848 [568/710] Linking target lib/librte_ethdev.so.24.0 00:01:50.848 [569/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:50.848 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:50.848 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:51.113 [572/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:51.113 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:51.113 [574/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:51.113 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:51.113 [576/710] Linking target lib/librte_metrics.so.24.0 00:01:51.114 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:51.114 [578/710] Linking target lib/librte_bpf.so.24.0 00:01:51.114 [579/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:51.376 [580/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:51.376 [581/710] Linking target lib/librte_eventdev.so.24.0 00:01:51.376 [582/710] Linking target lib/librte_gro.so.24.0 00:01:51.376 [583/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:51.376 [584/710] Linking target lib/librte_gso.so.24.0 00:01:51.376 [585/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:51.376 [586/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:51.376 [587/710] Linking target lib/librte_ip_frag.so.24.0 00:01:51.376 [588/710] Linking target lib/librte_pcapng.so.24.0 00:01:51.376 [589/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:51.376 [590/710] Linking target lib/librte_power.so.24.0 00:01:51.376 [591/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:51.376 [592/710] Linking target lib/librte_bitratestats.so.24.0 00:01:51.376 [593/710] Linking target lib/librte_latencystats.so.24.0 00:01:51.376 [594/710] Linking static target lib/librte_pdcp.a 00:01:51.376 [595/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:51.376 [596/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:51.640 [597/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:51.640 [598/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:51.640 [599/710] Linking target lib/librte_dispatcher.so.24.0 00:01:51.640 [600/710] Generating symbol file 
lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:51.640 [601/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:51.640 [602/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:51.640 [603/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:51.903 [604/710] Linking target lib/librte_port.so.24.0 00:01:51.903 [605/710] Linking target lib/librte_pdump.so.24.0 00:01:51.903 [606/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:51.903 [607/710] Linking target lib/librte_graph.so.24.0 00:01:51.903 [608/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:51.903 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:51.903 [610/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:51.903 [611/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:51.903 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:51.903 [613/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.903 [614/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:52.168 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:52.168 [616/710] Linking target lib/librte_pdcp.so.24.0 00:01:52.168 [617/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:52.168 [618/710] Linking target lib/librte_table.so.24.0 00:01:52.168 [619/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:52.168 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:52.431 [621/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:52.431 [622/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:52.431 [623/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:52.431 [624/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:52.431 [625/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:52.431 [626/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:52.693 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:52.693 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:52.693 [629/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:52.956 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:52.956 [631/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:53.214 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:53.214 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:53.214 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:53.214 [635/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:53.214 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:53.214 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:53.473 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:53.473 [639/710] Compiling C 
object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:53.473 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:53.473 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:53.473 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:53.732 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:53.732 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:53.732 [645/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:53.989 [646/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:53.989 [647/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:53.989 [648/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:53.989 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:53.989 [650/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:53.989 [651/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:54.247 [652/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:54.247 [653/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:54.247 [654/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:54.247 [655/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:54.247 [656/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:54.505 [657/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:54.505 [658/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:54.505 [659/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:54.505 [660/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:54.505 [661/710] Linking static target drivers/librte_net_i40e.a 00:01:54.762 [662/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:54.762 [663/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:54.762 [664/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:55.020 [665/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:55.278 [666/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:55.278 [667/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.278 [668/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:55.536 [669/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:55.536 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:55.793 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:56.051 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:56.051 [673/710] Linking static target lib/librte_node.a 00:01:56.310 [674/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.310 [675/710] Linking target lib/librte_node.so.24.0 00:01:56.310 [676/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:57.684 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 
00:01:57.684 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:57.684 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:59.584 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:59.842 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:06.402 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:38.538 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:38.538 [684/710] Linking static target lib/librte_vhost.a 00:02:38.538 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.538 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:53.423 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:53.423 [688/710] Linking static target lib/librte_pipeline.a 00:02:53.423 [689/710] Linking target app/dpdk-test-acl 00:02:53.423 [690/710] Linking target app/dpdk-dumpcap 00:02:53.423 [691/710] Linking target app/dpdk-pdump 00:02:53.423 [692/710] Linking target app/dpdk-proc-info 00:02:53.423 [693/710] Linking target app/dpdk-test-cmdline 00:02:53.423 [694/710] Linking target app/dpdk-test-regex 00:02:53.423 [695/710] Linking target app/dpdk-test-fib 00:02:53.423 [696/710] Linking target app/dpdk-test-gpudev 00:02:53.423 [697/710] Linking target app/dpdk-test-sad 00:02:53.423 [698/710] Linking target app/dpdk-test-security-perf 00:02:53.423 [699/710] Linking target app/dpdk-test-dma-perf 00:02:53.423 [700/710] Linking target app/dpdk-graph 00:02:53.423 [701/710] Linking target app/dpdk-test-pipeline 00:02:53.423 [702/710] Linking target app/dpdk-test-flow-perf 00:02:53.423 [703/710] Linking target app/dpdk-test-bbdev 00:02:53.423 [704/710] Linking target app/dpdk-test-compress-perf 00:02:53.423 [705/710] Linking target app/dpdk-test-mldev 00:02:53.423 [706/710] Linking target app/dpdk-test-crypto-perf 00:02:53.423 [707/710] Linking target app/dpdk-test-eventdev 00:02:53.423 [708/710] Linking target app/dpdk-testpmd 00:02:55.322 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.322 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:55.322 03:06:01 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:55.322 03:06:01 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:55.322 03:06:01 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:55.322 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:55.322 [0/1] Installing files. 
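Before installing, the autobuild script probes `uname -s` and tests it against FreeBSD (the `\F\r\e\e\B\S\D` above is just bash xtrace escaping); on this Linux node the comparison fails and a plain `ninja install` follows. A minimal sketch of that gate as it appears in the trace, with the FreeBSD branch left as a placeholder since the log does not show what it would do:

    if [[ "$(uname -s)" == "FreeBSD" ]]; then
        :   # FreeBSD-specific path, not exercised in this run
    fi
    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install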
00:02:55.582 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.584 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:55.844 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:55.845 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.845 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.846 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:55.847 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:55.847 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:55.848 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:55.848 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:55.848 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.848 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.849 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.849 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.849 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.418 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.419 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.419 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.419 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.419 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:56.419 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.419 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:56.419 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.419 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:56.419 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.419 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:56.419 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.419 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.420 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.420 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.421 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.422 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:02:56.423 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:02:56.423 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24
00:02:56.423 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so
00:02:56.423 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24
00:02:56.423 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:02:56.424 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24
00:02:56.424 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:02:56.424 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24
00:02:56.424 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so
00:02:56.424 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24
00:02:56.424 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so
00:02:56.424 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24
00:02:56.424 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so
00:02:56.424 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24
00:02:56.424 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so
00:02:56.424 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24
00:02:56.424 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so
00:02:56.424 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24
00:02:56.424 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so
00:02:56.424 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24
00:02:56.424 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so
00:02:56.424 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24
00:02:56.424 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so
00:02:56.424 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24
00:02:56.424 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so
00:02:56.424 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24
00:02:56.424 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so
00:02:56.424 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24
00:02:56.424 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so
00:02:56.424 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24
00:02:56.424 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so
00:02:56.424 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24
00:02:56.424 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so
00:02:56.424 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24
00:02:56.424 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so
00:02:56.424 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24
00:02:56.424 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so
00:02:56.424 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24
00:02:56.424 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so
00:02:56.424 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24
00:02:56.424 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so
00:02:56.424 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24
00:02:56.424 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so
00:02:56.424 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24
00:02:56.424 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so
00:02:56.424 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24
00:02:56.424 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so
00:02:56.424 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24
00:02:56.424 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so
00:02:56.424 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24
00:02:56.424 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so
00:02:56.424 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24
00:02:56.424 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so
00:02:56.424 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24
00:02:56.424 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so
00:02:56.424 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24
00:02:56.424 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so
00:02:56.424 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24
00:02:56.424 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so
00:02:56.424 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24
00:02:56.424 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so
00:02:56.424 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24
00:02:56.424 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so
00:02:56.424 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24
00:02:56.424 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so
00:02:56.424 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24
00:02:56.424 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so
00:02:56.424 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24
00:02:56.424 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so
00:02:56.424 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24
00:02:56.424 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so
00:02:56.424 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24
00:02:56.424 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so
00:02:56.424 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24
00:02:56.424 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so
00:02:56.424 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24
00:02:56.425 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so
00:02:56.425 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so'
00:02:56.425 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24'
00:02:56.425 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0'
00:02:56.425 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so'
00:02:56.425 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24'
00:02:56.425 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0'
00:02:56.425 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so'
00:02:56.425 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24'
00:02:56.425 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0'
00:02:56.425 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so'
00:02:56.425 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24'
00:02:56.425 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0'
00:02:56.425 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24
00:02:56.425 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so
00:02:56.425 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24
00:02:56.425 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so
00:02:56.425 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24
00:02:56.425 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so
00:02:56.425 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24
00:02:56.425 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so
00:02:56.425 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24
00:02:56.425 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so
00:02:56.425 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24
00:02:56.425 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so
00:02:56.425 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24
00:02:56.425 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so
00:02:56.425 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24
00:02:56.425 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so
00:02:56.425 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24
00:02:56.425 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so
00:02:56.425 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24
00:02:56.425 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so
00:02:56.425 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24
00:02:56.425 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so
00:02:56.425 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24
00:02:56.425 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so
00:02:56.425 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24
00:02:56.425 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so
00:02:56.425 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24
00:02:56.425 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so
00:02:56.425 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24
00:02:56.425 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so
00:02:56.425 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24
00:02:56.425 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so
00:02:56.425 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24
00:02:56.425 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so
00:02:56.425 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24
00:02:56.425 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so
00:02:56.425 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24
00:02:56.425 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:02:56.425 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24
00:02:56.425 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:02:56.425 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24
00:02:56.425 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:02:56.425 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24
00:02:56.425 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:02:56.425 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:02:56.683 03:06:02 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat
00:02:56.683 03:06:02 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:56.683
00:02:56.683 real 1m30.375s
00:02:56.683 user 18m1.957s
00:02:56.684 sys 2m6.997s
00:02:56.684 03:06:02 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:56.684 03:06:02 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:56.684 ************************************
00:02:56.684 END TEST build_native_dpdk
00:02:56.684 ************************************
00:02:56.684 03:06:02 -- common/autotest_common.sh@1142 -- $ return 0
00:02:56.684 03:06:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:56.684 03:06:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:56.684 03:06:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:56.684 03:06:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:56.684 03:06:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:56.684 03:06:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:56.684 03:06:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:56.684 03:06:02 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:02:56.684 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:02:56.684 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.684 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.684 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:56.941 Using 'verbs' RDMA provider
00:03:07.495 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:17.473 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:17.473 Creating mk/config.mk...done.
00:03:17.473 Creating mk/cc.flags.mk...done.
00:03:17.473 Type 'make' to build.
00:03:17.473 03:06:21 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:03:17.473 03:06:21 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:03:17.473 03:06:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:03:17.473 03:06:21 -- common/autotest_common.sh@10 -- $ set +x
00:03:17.473 ************************************
00:03:17.473 START TEST make
00:03:17.473 ************************************
00:03:17.473 03:06:22 make -- common/autotest_common.sh@1123 -- $ make -j48
00:03:17.473 make[1]: Nothing to be done for 'all'.
00:03:17.735 The Meson build system
00:03:17.735 Version: 1.3.1
00:03:17.735 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:17.735 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:17.735 Build type: native build
00:03:17.735 Project name: libvfio-user
00:03:17.735 Project version: 0.0.1
00:03:17.735 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:17.735 C linker for the host machine: gcc ld.bfd 2.39-16
00:03:17.735 Host machine cpu family: x86_64
00:03:17.735 Host machine cpu: x86_64
00:03:17.735 Run-time dependency threads found: YES
00:03:17.735 Library dl found: YES
00:03:17.735 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:17.735 Run-time dependency json-c found: YES 0.17
00:03:17.735 Run-time dependency cmocka found: YES 1.1.7
00:03:17.735 Program pytest-3 found: NO
00:03:17.735 Program flake8 found: NO
00:03:17.735 Program misspell-fixer found: NO
00:03:17.735 Program restructuredtext-lint found: NO
00:03:17.735 Program valgrind found: YES (/usr/bin/valgrind)
00:03:17.735 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:17.735 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:17.735 Compiler for C supports arguments -Wwrite-strings: YES
00:03:17.735 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:17.735 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:17.735 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:17.735 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:17.735 Build targets in project: 8
00:03:17.736 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:17.736 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:17.736
00:03:17.736 libvfio-user 0.0.1
00:03:17.736
00:03:17.736 User defined options
00:03:17.736 buildtype : debug
00:03:17.736 default_library: shared
00:03:17.736 libdir : /usr/local/lib
00:03:17.736
00:03:17.736 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:18.741 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:18.741 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:18.741 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:18.741 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:18.741 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:18.741 [5/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:18.741 [6/37] Compiling C object samples/null.p/null.c.o
00:03:18.741 [7/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:18.741 [8/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:18.741 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:18.741 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:18.741 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:18.741 [12/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:18.741 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:18.741 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:18.741 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:19.017 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:19.017 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:19.017 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:19.017 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:19.017 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:19.017 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:19.017 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:19.017 [23/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:19.017 [24/37] Compiling C object samples/server.p/server.c.o
00:03:19.017 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:19.017 [26/37] Compiling C object samples/client.p/client.c.o
00:03:19.017 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:19.017 [28/37] Linking target lib/libvfio-user.so.0.0.1
00:03:19.017 [29/37] Linking target samples/client
00:03:19.281 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:19.281 [31/37] Linking target test/unit_tests
00:03:19.281 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:19.281 [33/37] Linking target samples/null
00:03:19.281 [34/37] Linking target samples/gpio-pci-idio-16
00:03:19.282 [35/37] Linking target samples/lspci
00:03:19.282 [36/37] Linking target samples/server
00:03:19.282 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:19.282 INFO: autodetecting backend as ninja
00:03:19.282 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:19.282 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:20.227 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:20.227 ninja: no work to do.
00:03:32.435 CC lib/ut_mock/mock.o
00:03:32.435 CC lib/log/log.o
00:03:32.435 CC lib/log/log_flags.o
00:03:32.435 CC lib/log/log_deprecated.o
00:03:32.435 CC lib/ut/ut.o
00:03:32.435 LIB libspdk_log.a
00:03:32.435 LIB libspdk_ut.a
00:03:32.435 LIB libspdk_ut_mock.a
00:03:32.435 SO libspdk_log.so.7.0
00:03:32.435 SO libspdk_ut.so.2.0
00:03:32.435 SO libspdk_ut_mock.so.6.0
00:03:32.435 SYMLINK libspdk_ut_mock.so
00:03:32.435 SYMLINK libspdk_ut.so
00:03:32.435 SYMLINK libspdk_log.so
00:03:32.435 CC lib/ioat/ioat.o
00:03:32.435 CXX lib/trace_parser/trace.o
00:03:32.435 CC lib/dma/dma.o
00:03:32.435 CC lib/util/base64.o
00:03:32.435 CC lib/util/bit_array.o
00:03:32.435 CC lib/util/cpuset.o
00:03:32.435 CC lib/util/crc16.o
00:03:32.435 CC lib/util/crc32.o
00:03:32.435 CC lib/util/crc32c.o
00:03:32.435 CC lib/util/crc32_ieee.o
00:03:32.435 CC lib/util/crc64.o
00:03:32.435 CC lib/util/dif.o
00:03:32.435 CC lib/util/fd.o
00:03:32.435 CC lib/util/file.o
00:03:32.435 CC lib/util/hexlify.o
00:03:32.435 CC lib/util/iov.o
00:03:32.435 CC lib/util/math.o
00:03:32.435 CC lib/util/pipe.o
00:03:32.435 CC lib/util/strerror_tls.o
00:03:32.435 CC lib/util/string.o
00:03:32.435 CC lib/util/uuid.o
00:03:32.435 CC lib/util/fd_group.o
00:03:32.435 CC lib/util/xor.o
00:03:32.435 CC lib/util/zipf.o
00:03:32.435 CC lib/vfio_user/host/vfio_user_pci.o
00:03:32.435 CC lib/vfio_user/host/vfio_user.o
00:03:32.435 LIB libspdk_dma.a
00:03:32.435 SO libspdk_dma.so.4.0
00:03:32.435 SYMLINK libspdk_dma.so
00:03:32.435 LIB libspdk_ioat.a
00:03:32.435 SO libspdk_ioat.so.7.0
00:03:32.435 LIB libspdk_vfio_user.a
00:03:32.435 SYMLINK libspdk_ioat.so
00:03:32.435 SO libspdk_vfio_user.so.5.0
00:03:32.435 SYMLINK libspdk_vfio_user.so
00:03:32.435 LIB libspdk_util.a
00:03:32.435 SO libspdk_util.so.9.1
00:03:32.693 SYMLINK libspdk_util.so
00:03:32.950 CC lib/conf/conf.o
00:03:32.950 CC lib/idxd/idxd.o
00:03:32.950 CC lib/rdma_provider/common.o
00:03:32.950 CC lib/env_dpdk/env.o
00:03:32.950 CC lib/rdma_utils/rdma_utils.o
00:03:32.950 CC lib/json/json_parse.o
00:03:32.950 CC lib/vmd/vmd.o
00:03:32.950 CC lib/idxd/idxd_user.o
00:03:32.950 CC lib/env_dpdk/memory.o
00:03:32.950 CC lib/rdma_provider/rdma_provider_verbs.o
00:03:32.950 CC lib/vmd/led.o
00:03:32.950 CC lib/json/json_util.o
00:03:32.950 CC lib/idxd/idxd_kernel.o
00:03:32.950 CC lib/env_dpdk/pci.o
00:03:32.950 CC lib/json/json_write.o
00:03:32.950 CC lib/env_dpdk/init.o
00:03:32.950 CC lib/env_dpdk/threads.o
00:03:32.950 CC lib/env_dpdk/pci_ioat.o
00:03:32.950 CC lib/env_dpdk/pci_virtio.o
00:03:32.950 CC lib/env_dpdk/pci_vmd.o
00:03:32.950 CC lib/env_dpdk/pci_idxd.o
00:03:32.950 CC lib/env_dpdk/pci_event.o
00:03:32.950 CC lib/env_dpdk/pci_dpdk.o
00:03:32.950 CC lib/env_dpdk/sigbus_handler.o
00:03:32.950 CC lib/env_dpdk/pci_dpdk_2211.o
00:03:32.950 CC lib/env_dpdk/pci_dpdk_2207.o
00:03:32.950 LIB libspdk_trace_parser.a
00:03:32.950 SO libspdk_trace_parser.so.5.0
00:03:32.950 SYMLINK libspdk_trace_parser.so
00:03:33.208 LIB libspdk_rdma_provider.a
00:03:33.208 SO libspdk_rdma_provider.so.6.0
00:03:33.208 LIB libspdk_conf.a
00:03:33.208 SO libspdk_conf.so.6.0
00:03:33.208 LIB libspdk_rdma_utils.a
00:03:33.208 SYMLINK libspdk_rdma_provider.so
00:03:33.208 LIB libspdk_json.a
00:03:33.208 SYMLINK libspdk_conf.so
00:03:33.208 SO libspdk_rdma_utils.so.1.0
00:03:33.208 SO libspdk_json.so.6.0
00:03:33.208 SYMLINK libspdk_rdma_utils.so
00:03:33.208 SYMLINK libspdk_json.so
00:03:33.466 CC lib/jsonrpc/jsonrpc_server.o
00:03:33.466 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:33.466 CC lib/jsonrpc/jsonrpc_client.o
00:03:33.466 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:33.466 LIB libspdk_idxd.a
00:03:33.466 SO libspdk_idxd.so.12.0
00:03:33.466 SYMLINK libspdk_idxd.so
00:03:33.466 LIB libspdk_vmd.a
00:03:33.466 SO libspdk_vmd.so.6.0
00:03:33.724 SYMLINK libspdk_vmd.so
00:03:33.724 LIB libspdk_jsonrpc.a
00:03:33.724 SO libspdk_jsonrpc.so.6.0
00:03:33.724 SYMLINK libspdk_jsonrpc.so
00:03:33.982 CC lib/rpc/rpc.o
00:03:34.239 LIB libspdk_rpc.a
00:03:34.239 SO libspdk_rpc.so.6.0
00:03:34.239 SYMLINK libspdk_rpc.so
00:03:34.501 CC lib/notify/notify.o
00:03:34.502 CC lib/notify/notify_rpc.o
00:03:34.502 CC lib/keyring/keyring.o
00:03:34.502 CC lib/trace/trace.o
00:03:34.502 CC lib/keyring/keyring_rpc.o
00:03:34.502 CC lib/trace/trace_flags.o
00:03:34.502 CC lib/trace/trace_rpc.o
00:03:34.502 LIB libspdk_notify.a
00:03:34.502 SO libspdk_notify.so.6.0
00:03:34.763 LIB libspdk_keyring.a
00:03:34.763 SYMLINK libspdk_notify.so
00:03:34.763 LIB libspdk_trace.a
00:03:34.763 SO libspdk_keyring.so.1.0
00:03:34.763 SO libspdk_trace.so.10.0
00:03:34.763 SYMLINK libspdk_keyring.so
00:03:34.763 SYMLINK libspdk_trace.so
00:03:34.763 LIB libspdk_env_dpdk.a
00:03:35.020 CC lib/sock/sock.o
00:03:35.020 CC lib/sock/sock_rpc.o
00:03:35.020 CC lib/thread/thread.o
00:03:35.020 CC lib/thread/iobuf.o
00:03:35.020 SO libspdk_env_dpdk.so.14.1
00:03:35.020 SYMLINK libspdk_env_dpdk.so
00:03:35.276 LIB libspdk_sock.a
00:03:35.276 SO libspdk_sock.so.10.0
00:03:35.276 SYMLINK libspdk_sock.so
00:03:35.533 CC lib/nvme/nvme_ctrlr_cmd.o
00:03:35.533 CC lib/nvme/nvme_ctrlr.o
00:03:35.533 CC lib/nvme/nvme_fabric.o
00:03:35.533 CC lib/nvme/nvme_ns_cmd.o
00:03:35.533 CC lib/nvme/nvme_pcie_common.o
00:03:35.533 CC lib/nvme/nvme_ns.o
00:03:35.533 CC lib/nvme/nvme_pcie.o
00:03:35.533 CC lib/nvme/nvme_qpair.o
00:03:35.533 CC lib/nvme/nvme.o
00:03:35.533 CC lib/nvme/nvme_quirks.o
00:03:35.533 CC lib/nvme/nvme_transport.o
00:03:35.533 CC lib/nvme/nvme_discovery.o
00:03:35.533 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:35.534 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:35.534 CC lib/nvme/nvme_tcp.o
00:03:35.534 CC lib/nvme/nvme_opal.o
00:03:35.534 CC lib/nvme/nvme_io_msg.o
00:03:35.534 CC lib/nvme/nvme_poll_group.o
00:03:35.534 CC lib/nvme/nvme_zns.o
00:03:35.534 CC lib/nvme/nvme_stubs.o
00:03:35.534 CC lib/nvme/nvme_auth.o
00:03:35.534 CC lib/nvme/nvme_cuse.o
00:03:35.534 CC lib/nvme/nvme_vfio_user.o
00:03:35.534 CC lib/nvme/nvme_rdma.o
00:03:36.469 LIB libspdk_thread.a
00:03:36.469 SO libspdk_thread.so.10.1
00:03:36.469 SYMLINK libspdk_thread.so
00:03:36.726 CC lib/accel/accel.o
00:03:36.726 CC lib/blob/blobstore.o
00:03:36.726 CC lib/vfu_tgt/tgt_endpoint.o
00:03:36.726 CC lib/virtio/virtio.o
00:03:36.726 CC lib/init/json_config.o
00:03:36.726 CC lib/accel/accel_rpc.o
00:03:36.726 CC lib/blob/request.o
00:03:36.726 CC lib/vfu_tgt/tgt_rpc.o
00:03:36.726 CC lib/virtio/virtio_vhost_user.o
00:03:36.726 CC lib/init/subsystem.o
00:03:36.726 CC lib/accel/accel_sw.o
00:03:36.726 CC lib/blob/zeroes.o
00:03:36.726 CC lib/init/subsystem_rpc.o
00:03:36.726 CC lib/virtio/virtio_vfio_user.o
00:03:36.726 CC lib/blob/blob_bs_dev.o
00:03:36.726 CC lib/init/rpc.o
00:03:36.726 CC lib/virtio/virtio_pci.o
00:03:36.984 LIB libspdk_init.a
00:03:36.984 SO libspdk_init.so.5.0
00:03:37.251 LIB libspdk_virtio.a
00:03:37.251 LIB libspdk_vfu_tgt.a
00:03:37.251 SYMLINK libspdk_init.so
00:03:37.251 SO libspdk_vfu_tgt.so.3.0
00:03:37.251 SO libspdk_virtio.so.7.0
00:03:37.251 SYMLINK libspdk_vfu_tgt.so
00:03:37.251 SYMLINK libspdk_virtio.so
00:03:37.251 CC lib/event/app.o
00:03:37.251 CC lib/event/reactor.o
00:03:37.251 CC lib/event/log_rpc.o
00:03:37.251 CC lib/event/app_rpc.o
00:03:37.251 CC lib/event/scheduler_static.o
00:03:37.820 LIB libspdk_event.a
00:03:37.820 SO libspdk_event.so.14.0
00:03:37.820 LIB libspdk_accel.a
00:03:37.820 SYMLINK libspdk_event.so
00:03:37.820 SO libspdk_accel.so.15.1
00:03:37.820 SYMLINK libspdk_accel.so
00:03:38.077 LIB libspdk_nvme.a
00:03:38.077 CC lib/bdev/bdev.o
00:03:38.077 CC lib/bdev/bdev_rpc.o
00:03:38.077 CC lib/bdev/bdev_zone.o
00:03:38.077 CC lib/bdev/part.o
00:03:38.077 CC lib/bdev/scsi_nvme.o
00:03:38.077 SO libspdk_nvme.so.13.1
00:03:38.335 SYMLINK libspdk_nvme.so
00:03:40.233 LIB libspdk_blob.a
00:03:40.233 SO libspdk_blob.so.11.0
00:03:40.233 SYMLINK libspdk_blob.so
00:03:40.492 CC lib/lvol/lvol.o
00:03:40.492 CC lib/blobfs/blobfs.o
00:03:40.492 CC lib/blobfs/tree.o
00:03:40.492 LIB libspdk_bdev.a
00:03:40.492 SO libspdk_bdev.so.15.1
00:03:40.762 SYMLINK libspdk_bdev.so
00:03:40.762 CC lib/nbd/nbd.o
00:03:40.762 CC lib/ublk/ublk.o
00:03:40.762 CC lib/scsi/dev.o
00:03:40.762 CC lib/nbd/nbd_rpc.o
00:03:40.762 CC lib/nvmf/ctrlr.o
00:03:40.762 CC lib/scsi/lun.o
00:03:40.762 CC lib/ublk/ublk_rpc.o
00:03:40.762 CC lib/nvmf/ctrlr_discovery.o
00:03:40.762 CC lib/scsi/port.o
00:03:40.762 CC lib/ftl/ftl_core.o
00:03:40.762 CC lib/scsi/scsi.o
00:03:40.762 CC lib/nvmf/ctrlr_bdev.o
00:03:40.762 CC lib/ftl/ftl_init.o
00:03:40.762 CC lib/scsi/scsi_bdev.o
00:03:40.762 CC lib/nvmf/subsystem.o
00:03:40.762 CC lib/ftl/ftl_layout.o
00:03:40.762 CC lib/scsi/scsi_pr.o
00:03:40.762 CC lib/nvmf/nvmf.o
00:03:40.762 CC lib/ftl/ftl_debug.o
00:03:40.762 CC lib/nvmf/nvmf_rpc.o
00:03:40.762 CC lib/ftl/ftl_io.o
00:03:40.762 CC lib/scsi/scsi_rpc.o
00:03:40.762 CC lib/nvmf/transport.o
00:03:40.762 CC lib/nvmf/tcp.o
00:03:40.762 CC lib/ftl/ftl_sb.o
00:03:40.762 CC lib/scsi/task.o
00:03:40.762 CC lib/ftl/ftl_l2p.o
00:03:40.762 CC lib/nvmf/stubs.o
00:03:40.762 CC lib/ftl/ftl_l2p_flat.o
00:03:40.762 CC lib/nvmf/mdns_server.o
00:03:40.762 CC lib/nvmf/vfio_user.o
00:03:40.762 CC lib/ftl/ftl_nv_cache.o
00:03:40.762 CC lib/nvmf/rdma.o
00:03:40.762 CC lib/ftl/ftl_band.o
00:03:40.762 CC lib/ftl/ftl_band_ops.o
00:03:40.762 CC lib/nvmf/auth.o
00:03:40.762 CC lib/ftl/ftl_writer.o
00:03:40.762 CC lib/ftl/ftl_rq.o
00:03:40.762 CC lib/ftl/ftl_reloc.o
00:03:40.762 CC lib/ftl/ftl_l2p_cache.o
00:03:40.762 CC lib/ftl/ftl_p2l.o
00:03:40.762 CC lib/ftl/mngt/ftl_mngt.o
00:03:40.762 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:40.762 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:40.762 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:40.762 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:41.335 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:41.335 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:41.335 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:41.335 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:41.335 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:41.335 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:41.335 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:41.335 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:41.335 CC lib/ftl/utils/ftl_conf.o
00:03:41.335 CC lib/ftl/utils/ftl_md.o
00:03:41.335 CC lib/ftl/utils/ftl_mempool.o
00:03:41.335 CC lib/ftl/utils/ftl_bitmap.o
00:03:41.335 CC lib/ftl/utils/ftl_property.o 00:03:41.335 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:41.335 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:41.335 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:41.335 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:41.335 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:41.595 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:41.595 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:41.595 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:41.595 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:41.595 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:41.595 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:41.595 CC lib/ftl/base/ftl_base_dev.o 00:03:41.595 CC lib/ftl/base/ftl_base_bdev.o 00:03:41.595 CC lib/ftl/ftl_trace.o 00:03:41.595 LIB libspdk_nbd.a 00:03:41.595 LIB libspdk_blobfs.a 00:03:41.595 SO libspdk_nbd.so.7.0 00:03:41.595 SO libspdk_blobfs.so.10.0 00:03:41.854 SYMLINK libspdk_nbd.so 00:03:41.854 LIB libspdk_scsi.a 00:03:41.854 SYMLINK libspdk_blobfs.so 00:03:41.854 SO libspdk_scsi.so.9.0 00:03:41.854 LIB libspdk_lvol.a 00:03:41.854 SO libspdk_lvol.so.10.0 00:03:41.854 LIB libspdk_ublk.a 00:03:41.854 SYMLINK libspdk_scsi.so 00:03:41.854 SYMLINK libspdk_lvol.so 00:03:41.854 SO libspdk_ublk.so.3.0 00:03:42.112 SYMLINK libspdk_ublk.so 00:03:42.112 CC lib/vhost/vhost.o 00:03:42.112 CC lib/iscsi/conn.o 00:03:42.112 CC lib/iscsi/init_grp.o 00:03:42.112 CC lib/vhost/vhost_rpc.o 00:03:42.112 CC lib/vhost/vhost_scsi.o 00:03:42.112 CC lib/iscsi/iscsi.o 00:03:42.112 CC lib/vhost/vhost_blk.o 00:03:42.113 CC lib/vhost/rte_vhost_user.o 00:03:42.113 CC lib/iscsi/md5.o 00:03:42.113 CC lib/iscsi/param.o 00:03:42.113 CC lib/iscsi/portal_grp.o 00:03:42.113 CC lib/iscsi/tgt_node.o 00:03:42.113 CC lib/iscsi/iscsi_subsystem.o 00:03:42.113 CC lib/iscsi/iscsi_rpc.o 00:03:42.113 CC lib/iscsi/task.o 00:03:42.371 LIB libspdk_ftl.a 00:03:42.371 SO libspdk_ftl.so.9.0 00:03:42.937 SYMLINK libspdk_ftl.so 00:03:43.195 LIB libspdk_vhost.a 00:03:43.454 SO libspdk_vhost.so.8.0 00:03:43.454 LIB libspdk_nvmf.a 00:03:43.454 SO libspdk_nvmf.so.18.1 00:03:43.454 SYMLINK libspdk_vhost.so 00:03:43.454 LIB libspdk_iscsi.a 00:03:43.712 SO libspdk_iscsi.so.8.0 00:03:43.712 SYMLINK libspdk_nvmf.so 00:03:43.712 SYMLINK libspdk_iscsi.so 00:03:43.970 CC module/vfu_device/vfu_virtio.o 00:03:43.970 CC module/vfu_device/vfu_virtio_blk.o 00:03:43.970 CC module/vfu_device/vfu_virtio_scsi.o 00:03:43.970 CC module/vfu_device/vfu_virtio_rpc.o 00:03:43.970 CC module/env_dpdk/env_dpdk_rpc.o 00:03:43.970 CC module/accel/error/accel_error.o 00:03:43.970 CC module/accel/iaa/accel_iaa_rpc.o 00:03:43.970 CC module/accel/error/accel_error_rpc.o 00:03:43.970 CC module/accel/iaa/accel_iaa.o 00:03:43.970 CC module/keyring/file/keyring.o 00:03:43.970 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:43.970 CC module/keyring/file/keyring_rpc.o 00:03:43.970 CC module/accel/ioat/accel_ioat.o 00:03:43.970 CC module/keyring/linux/keyring.o 00:03:43.970 CC module/accel/ioat/accel_ioat_rpc.o 00:03:43.970 CC module/blob/bdev/blob_bdev.o 00:03:43.970 CC module/keyring/linux/keyring_rpc.o 00:03:43.970 CC module/accel/dsa/accel_dsa.o 00:03:43.970 CC module/accel/dsa/accel_dsa_rpc.o 00:03:43.970 CC module/sock/posix/posix.o 00:03:43.970 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:43.970 CC module/scheduler/gscheduler/gscheduler.o 00:03:44.229 LIB libspdk_env_dpdk_rpc.a 00:03:44.229 SO libspdk_env_dpdk_rpc.so.6.0 00:03:44.229 SYMLINK libspdk_env_dpdk_rpc.so 00:03:44.229 LIB libspdk_keyring_file.a 00:03:44.229 LIB libspdk_keyring_linux.a 00:03:44.229 LIB 
libspdk_scheduler_dpdk_governor.a 00:03:44.229 LIB libspdk_scheduler_gscheduler.a 00:03:44.229 SO libspdk_keyring_linux.so.1.0 00:03:44.229 SO libspdk_keyring_file.so.1.0 00:03:44.229 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:44.229 SO libspdk_scheduler_gscheduler.so.4.0 00:03:44.229 LIB libspdk_accel_error.a 00:03:44.229 LIB libspdk_accel_ioat.a 00:03:44.229 LIB libspdk_scheduler_dynamic.a 00:03:44.229 LIB libspdk_accel_iaa.a 00:03:44.229 SO libspdk_accel_error.so.2.0 00:03:44.229 SO libspdk_scheduler_dynamic.so.4.0 00:03:44.487 SO libspdk_accel_ioat.so.6.0 00:03:44.487 SYMLINK libspdk_keyring_file.so 00:03:44.487 SYMLINK libspdk_keyring_linux.so 00:03:44.487 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:44.487 SYMLINK libspdk_scheduler_gscheduler.so 00:03:44.487 SO libspdk_accel_iaa.so.3.0 00:03:44.487 LIB libspdk_accel_dsa.a 00:03:44.487 SYMLINK libspdk_accel_error.so 00:03:44.487 SYMLINK libspdk_scheduler_dynamic.so 00:03:44.487 LIB libspdk_blob_bdev.a 00:03:44.487 SYMLINK libspdk_accel_ioat.so 00:03:44.487 SO libspdk_accel_dsa.so.5.0 00:03:44.487 SYMLINK libspdk_accel_iaa.so 00:03:44.487 SO libspdk_blob_bdev.so.11.0 00:03:44.487 SYMLINK libspdk_blob_bdev.so 00:03:44.487 SYMLINK libspdk_accel_dsa.so 00:03:44.771 LIB libspdk_vfu_device.a 00:03:44.771 SO libspdk_vfu_device.so.3.0 00:03:44.771 CC module/bdev/malloc/bdev_malloc.o 00:03:44.771 CC module/blobfs/bdev/blobfs_bdev.o 00:03:44.771 CC module/bdev/error/vbdev_error.o 00:03:44.771 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:44.771 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:44.771 CC module/bdev/gpt/gpt.o 00:03:44.771 CC module/bdev/null/bdev_null.o 00:03:44.771 CC module/bdev/lvol/vbdev_lvol.o 00:03:44.771 CC module/bdev/error/vbdev_error_rpc.o 00:03:44.771 CC module/bdev/null/bdev_null_rpc.o 00:03:44.771 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:44.771 CC module/bdev/raid/bdev_raid.o 00:03:44.771 CC module/bdev/delay/vbdev_delay.o 00:03:44.771 CC module/bdev/gpt/vbdev_gpt.o 00:03:44.771 CC module/bdev/nvme/bdev_nvme.o 00:03:44.771 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:44.771 CC module/bdev/raid/bdev_raid_rpc.o 00:03:44.771 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:44.771 CC module/bdev/split/vbdev_split.o 00:03:44.771 CC module/bdev/raid/bdev_raid_sb.o 00:03:44.771 CC module/bdev/iscsi/bdev_iscsi.o 00:03:44.771 CC module/bdev/split/vbdev_split_rpc.o 00:03:44.771 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:44.771 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:44.771 CC module/bdev/ftl/bdev_ftl.o 00:03:44.771 CC module/bdev/raid/raid0.o 00:03:44.771 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:44.771 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:44.771 CC module/bdev/raid/raid1.o 00:03:44.771 CC module/bdev/nvme/nvme_rpc.o 00:03:44.771 CC module/bdev/passthru/vbdev_passthru.o 00:03:44.771 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:44.771 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:44.771 CC module/bdev/aio/bdev_aio.o 00:03:44.771 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:44.771 CC module/bdev/nvme/bdev_mdns_client.o 00:03:44.771 CC module/bdev/raid/concat.o 00:03:44.771 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:44.771 CC module/bdev/aio/bdev_aio_rpc.o 00:03:44.771 CC module/bdev/nvme/vbdev_opal.o 00:03:44.771 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:44.771 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:44.771 SYMLINK libspdk_vfu_device.so 00:03:45.086 LIB libspdk_sock_posix.a 00:03:45.086 SO libspdk_sock_posix.so.6.0 00:03:45.086 LIB libspdk_blobfs_bdev.a 00:03:45.086 SO 
libspdk_blobfs_bdev.so.6.0 00:03:45.086 SYMLINK libspdk_sock_posix.so 00:03:45.344 LIB libspdk_bdev_aio.a 00:03:45.344 LIB libspdk_bdev_split.a 00:03:45.344 SYMLINK libspdk_blobfs_bdev.so 00:03:45.345 SO libspdk_bdev_aio.so.6.0 00:03:45.345 SO libspdk_bdev_split.so.6.0 00:03:45.345 LIB libspdk_bdev_error.a 00:03:45.345 LIB libspdk_bdev_null.a 00:03:45.345 LIB libspdk_bdev_gpt.a 00:03:45.345 SO libspdk_bdev_error.so.6.0 00:03:45.345 SYMLINK libspdk_bdev_aio.so 00:03:45.345 SO libspdk_bdev_null.so.6.0 00:03:45.345 SYMLINK libspdk_bdev_split.so 00:03:45.345 LIB libspdk_bdev_ftl.a 00:03:45.345 LIB libspdk_bdev_passthru.a 00:03:45.345 SO libspdk_bdev_gpt.so.6.0 00:03:45.345 LIB libspdk_bdev_iscsi.a 00:03:45.345 LIB libspdk_bdev_zone_block.a 00:03:45.345 SO libspdk_bdev_ftl.so.6.0 00:03:45.345 SO libspdk_bdev_passthru.so.6.0 00:03:45.345 SYMLINK libspdk_bdev_error.so 00:03:45.345 SYMLINK libspdk_bdev_null.so 00:03:45.345 SO libspdk_bdev_iscsi.so.6.0 00:03:45.345 LIB libspdk_bdev_virtio.a 00:03:45.345 SO libspdk_bdev_zone_block.so.6.0 00:03:45.345 SYMLINK libspdk_bdev_gpt.so 00:03:45.345 LIB libspdk_bdev_delay.a 00:03:45.345 LIB libspdk_bdev_malloc.a 00:03:45.345 SO libspdk_bdev_virtio.so.6.0 00:03:45.345 SYMLINK libspdk_bdev_passthru.so 00:03:45.345 SYMLINK libspdk_bdev_ftl.so 00:03:45.345 SO libspdk_bdev_malloc.so.6.0 00:03:45.345 SO libspdk_bdev_delay.so.6.0 00:03:45.345 SYMLINK libspdk_bdev_iscsi.so 00:03:45.345 SYMLINK libspdk_bdev_zone_block.so 00:03:45.345 SYMLINK libspdk_bdev_virtio.so 00:03:45.345 SYMLINK libspdk_bdev_delay.so 00:03:45.345 SYMLINK libspdk_bdev_malloc.so 00:03:45.603 LIB libspdk_bdev_lvol.a 00:03:45.603 SO libspdk_bdev_lvol.so.6.0 00:03:45.603 SYMLINK libspdk_bdev_lvol.so 00:03:45.861 LIB libspdk_bdev_raid.a 00:03:45.861 SO libspdk_bdev_raid.so.6.0 00:03:46.120 SYMLINK libspdk_bdev_raid.so 00:03:47.054 LIB libspdk_bdev_nvme.a 00:03:47.311 SO libspdk_bdev_nvme.so.7.0 00:03:47.311 SYMLINK libspdk_bdev_nvme.so 00:03:47.568 CC module/event/subsystems/iobuf/iobuf.o 00:03:47.568 CC module/event/subsystems/scheduler/scheduler.o 00:03:47.568 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:47.568 CC module/event/subsystems/vmd/vmd.o 00:03:47.568 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:47.568 CC module/event/subsystems/keyring/keyring.o 00:03:47.568 CC module/event/subsystems/sock/sock.o 00:03:47.568 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:47.568 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:47.825 LIB libspdk_event_keyring.a 00:03:47.825 LIB libspdk_event_vhost_blk.a 00:03:47.825 LIB libspdk_event_scheduler.a 00:03:47.825 LIB libspdk_event_vmd.a 00:03:47.825 LIB libspdk_event_vfu_tgt.a 00:03:47.825 LIB libspdk_event_sock.a 00:03:47.825 SO libspdk_event_keyring.so.1.0 00:03:47.825 SO libspdk_event_vhost_blk.so.3.0 00:03:47.825 LIB libspdk_event_iobuf.a 00:03:47.825 SO libspdk_event_scheduler.so.4.0 00:03:47.825 SO libspdk_event_vfu_tgt.so.3.0 00:03:47.825 SO libspdk_event_vmd.so.6.0 00:03:47.825 SO libspdk_event_sock.so.5.0 00:03:47.825 SO libspdk_event_iobuf.so.3.0 00:03:47.825 SYMLINK libspdk_event_keyring.so 00:03:47.825 SYMLINK libspdk_event_vhost_blk.so 00:03:47.825 SYMLINK libspdk_event_vfu_tgt.so 00:03:47.825 SYMLINK libspdk_event_scheduler.so 00:03:47.825 SYMLINK libspdk_event_sock.so 00:03:47.825 SYMLINK libspdk_event_vmd.so 00:03:47.825 SYMLINK libspdk_event_iobuf.so 00:03:48.082 CC module/event/subsystems/accel/accel.o 00:03:48.340 LIB libspdk_event_accel.a 00:03:48.340 SO libspdk_event_accel.so.6.0 00:03:48.340 SYMLINK 
libspdk_event_accel.so 00:03:48.598 CC module/event/subsystems/bdev/bdev.o 00:03:48.598 LIB libspdk_event_bdev.a 00:03:48.598 SO libspdk_event_bdev.so.6.0 00:03:48.598 SYMLINK libspdk_event_bdev.so 00:03:48.856 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:48.856 CC module/event/subsystems/nbd/nbd.o 00:03:48.856 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:48.856 CC module/event/subsystems/ublk/ublk.o 00:03:48.856 CC module/event/subsystems/scsi/scsi.o 00:03:49.114 LIB libspdk_event_nbd.a 00:03:49.114 LIB libspdk_event_ublk.a 00:03:49.114 LIB libspdk_event_scsi.a 00:03:49.114 SO libspdk_event_ublk.so.3.0 00:03:49.114 SO libspdk_event_nbd.so.6.0 00:03:49.114 SO libspdk_event_scsi.so.6.0 00:03:49.114 SYMLINK libspdk_event_nbd.so 00:03:49.114 SYMLINK libspdk_event_ublk.so 00:03:49.114 SYMLINK libspdk_event_scsi.so 00:03:49.114 LIB libspdk_event_nvmf.a 00:03:49.114 SO libspdk_event_nvmf.so.6.0 00:03:49.114 SYMLINK libspdk_event_nvmf.so 00:03:49.373 CC module/event/subsystems/iscsi/iscsi.o 00:03:49.373 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:49.373 LIB libspdk_event_vhost_scsi.a 00:03:49.373 LIB libspdk_event_iscsi.a 00:03:49.373 SO libspdk_event_vhost_scsi.so.3.0 00:03:49.373 SO libspdk_event_iscsi.so.6.0 00:03:49.631 SYMLINK libspdk_event_vhost_scsi.so 00:03:49.631 SYMLINK libspdk_event_iscsi.so 00:03:49.631 SO libspdk.so.6.0 00:03:49.631 SYMLINK libspdk.so 00:03:49.898 CXX app/trace/trace.o 00:03:49.898 CC app/trace_record/trace_record.o 00:03:49.898 CC test/rpc_client/rpc_client_test.o 00:03:49.898 CC app/spdk_nvme_discover/discovery_aer.o 00:03:49.898 CC app/spdk_nvme_identify/identify.o 00:03:49.898 CC app/spdk_nvme_perf/perf.o 00:03:49.898 CC app/spdk_top/spdk_top.o 00:03:49.898 TEST_HEADER include/spdk/accel.h 00:03:49.898 CC app/spdk_lspci/spdk_lspci.o 00:03:49.898 TEST_HEADER include/spdk/accel_module.h 00:03:49.898 TEST_HEADER include/spdk/assert.h 00:03:49.898 TEST_HEADER include/spdk/barrier.h 00:03:49.898 TEST_HEADER include/spdk/base64.h 00:03:49.898 TEST_HEADER include/spdk/bdev.h 00:03:49.898 TEST_HEADER include/spdk/bdev_module.h 00:03:49.898 TEST_HEADER include/spdk/bdev_zone.h 00:03:49.898 TEST_HEADER include/spdk/bit_array.h 00:03:49.898 TEST_HEADER include/spdk/bit_pool.h 00:03:49.898 TEST_HEADER include/spdk/blob_bdev.h 00:03:49.898 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:49.898 TEST_HEADER include/spdk/blobfs.h 00:03:49.898 TEST_HEADER include/spdk/blob.h 00:03:49.898 TEST_HEADER include/spdk/conf.h 00:03:49.898 TEST_HEADER include/spdk/config.h 00:03:49.898 TEST_HEADER include/spdk/crc16.h 00:03:49.898 TEST_HEADER include/spdk/cpuset.h 00:03:49.898 TEST_HEADER include/spdk/crc32.h 00:03:49.898 TEST_HEADER include/spdk/crc64.h 00:03:49.898 TEST_HEADER include/spdk/dma.h 00:03:49.898 TEST_HEADER include/spdk/dif.h 00:03:49.898 TEST_HEADER include/spdk/endian.h 00:03:49.898 TEST_HEADER include/spdk/env_dpdk.h 00:03:49.898 TEST_HEADER include/spdk/env.h 00:03:49.898 TEST_HEADER include/spdk/event.h 00:03:49.898 TEST_HEADER include/spdk/fd_group.h 00:03:49.898 TEST_HEADER include/spdk/fd.h 00:03:49.898 TEST_HEADER include/spdk/file.h 00:03:49.898 TEST_HEADER include/spdk/ftl.h 00:03:49.898 TEST_HEADER include/spdk/gpt_spec.h 00:03:49.898 TEST_HEADER include/spdk/hexlify.h 00:03:49.898 TEST_HEADER include/spdk/histogram_data.h 00:03:49.898 TEST_HEADER include/spdk/idxd.h 00:03:49.898 TEST_HEADER include/spdk/idxd_spec.h 00:03:49.898 TEST_HEADER include/spdk/init.h 00:03:49.898 TEST_HEADER include/spdk/ioat.h 00:03:49.898 TEST_HEADER 
include/spdk/ioat_spec.h 00:03:49.898 TEST_HEADER include/spdk/iscsi_spec.h 00:03:49.898 TEST_HEADER include/spdk/json.h 00:03:49.898 TEST_HEADER include/spdk/jsonrpc.h 00:03:49.898 TEST_HEADER include/spdk/keyring.h 00:03:49.898 TEST_HEADER include/spdk/likely.h 00:03:49.898 TEST_HEADER include/spdk/keyring_module.h 00:03:49.898 TEST_HEADER include/spdk/log.h 00:03:49.898 TEST_HEADER include/spdk/lvol.h 00:03:49.899 TEST_HEADER include/spdk/memory.h 00:03:49.899 TEST_HEADER include/spdk/mmio.h 00:03:49.899 TEST_HEADER include/spdk/nbd.h 00:03:49.899 TEST_HEADER include/spdk/notify.h 00:03:49.899 TEST_HEADER include/spdk/nvme.h 00:03:49.899 TEST_HEADER include/spdk/nvme_intel.h 00:03:49.899 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:49.899 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:49.899 TEST_HEADER include/spdk/nvme_spec.h 00:03:49.899 TEST_HEADER include/spdk/nvme_zns.h 00:03:49.899 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:49.899 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:49.899 TEST_HEADER include/spdk/nvmf.h 00:03:49.899 TEST_HEADER include/spdk/nvmf_spec.h 00:03:49.899 TEST_HEADER include/spdk/nvmf_transport.h 00:03:49.899 TEST_HEADER include/spdk/opal.h 00:03:49.899 TEST_HEADER include/spdk/opal_spec.h 00:03:49.899 TEST_HEADER include/spdk/pci_ids.h 00:03:49.899 TEST_HEADER include/spdk/pipe.h 00:03:49.899 TEST_HEADER include/spdk/queue.h 00:03:49.899 TEST_HEADER include/spdk/reduce.h 00:03:49.899 TEST_HEADER include/spdk/rpc.h 00:03:49.899 TEST_HEADER include/spdk/scheduler.h 00:03:49.899 TEST_HEADER include/spdk/scsi.h 00:03:49.899 TEST_HEADER include/spdk/scsi_spec.h 00:03:49.899 TEST_HEADER include/spdk/sock.h 00:03:49.899 TEST_HEADER include/spdk/stdinc.h 00:03:49.899 TEST_HEADER include/spdk/string.h 00:03:49.899 TEST_HEADER include/spdk/thread.h 00:03:49.899 TEST_HEADER include/spdk/trace.h 00:03:49.899 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:49.899 TEST_HEADER include/spdk/trace_parser.h 00:03:49.899 TEST_HEADER include/spdk/tree.h 00:03:49.899 TEST_HEADER include/spdk/ublk.h 00:03:49.899 TEST_HEADER include/spdk/util.h 00:03:49.899 CC app/spdk_dd/spdk_dd.o 00:03:49.899 TEST_HEADER include/spdk/uuid.h 00:03:49.899 TEST_HEADER include/spdk/version.h 00:03:49.899 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:49.899 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:49.899 TEST_HEADER include/spdk/vhost.h 00:03:49.899 TEST_HEADER include/spdk/xor.h 00:03:49.899 TEST_HEADER include/spdk/vmd.h 00:03:49.899 TEST_HEADER include/spdk/zipf.h 00:03:49.899 CXX test/cpp_headers/accel.o 00:03:49.899 CXX test/cpp_headers/accel_module.o 00:03:49.899 CXX test/cpp_headers/assert.o 00:03:49.899 CXX test/cpp_headers/barrier.o 00:03:49.899 CXX test/cpp_headers/base64.o 00:03:49.899 CXX test/cpp_headers/bdev.o 00:03:49.899 CXX test/cpp_headers/bdev_module.o 00:03:49.899 CXX test/cpp_headers/bdev_zone.o 00:03:49.899 CXX test/cpp_headers/bit_array.o 00:03:49.899 CXX test/cpp_headers/bit_pool.o 00:03:49.899 CC app/nvmf_tgt/nvmf_main.o 00:03:49.899 CXX test/cpp_headers/blob_bdev.o 00:03:49.899 CXX test/cpp_headers/blobfs_bdev.o 00:03:49.899 CXX test/cpp_headers/blobfs.o 00:03:49.899 CXX test/cpp_headers/blob.o 00:03:49.899 CXX test/cpp_headers/conf.o 00:03:49.899 CXX test/cpp_headers/config.o 00:03:49.899 CXX test/cpp_headers/cpuset.o 00:03:49.899 CXX test/cpp_headers/crc16.o 00:03:49.899 CC app/iscsi_tgt/iscsi_tgt.o 00:03:49.899 CXX test/cpp_headers/crc32.o 00:03:49.899 CC app/spdk_tgt/spdk_tgt.o 00:03:49.899 CC test/env/memory/memory_ut.o 00:03:49.899 CC test/app/jsoncat/jsoncat.o 
00:03:49.899 CC test/thread/poller_perf/poller_perf.o 00:03:49.899 CC test/env/vtophys/vtophys.o 00:03:49.899 CC examples/util/zipf/zipf.o 00:03:49.899 CC test/app/histogram_perf/histogram_perf.o 00:03:49.899 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:49.899 CC examples/ioat/verify/verify.o 00:03:49.899 CC examples/ioat/perf/perf.o 00:03:49.899 CC test/app/stub/stub.o 00:03:49.899 CC test/env/pci/pci_ut.o 00:03:49.899 CC app/fio/nvme/fio_plugin.o 00:03:49.899 CC test/dma/test_dma/test_dma.o 00:03:50.161 CC test/app/bdev_svc/bdev_svc.o 00:03:50.161 CC app/fio/bdev/fio_plugin.o 00:03:50.161 CC test/env/mem_callbacks/mem_callbacks.o 00:03:50.161 LINK spdk_lspci 00:03:50.161 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:50.161 LINK rpc_client_test 00:03:50.161 LINK spdk_nvme_discover 00:03:50.161 LINK jsoncat 00:03:50.161 LINK histogram_perf 00:03:50.161 LINK vtophys 00:03:50.425 LINK interrupt_tgt 00:03:50.425 LINK poller_perf 00:03:50.425 CXX test/cpp_headers/crc64.o 00:03:50.425 LINK zipf 00:03:50.425 CXX test/cpp_headers/dif.o 00:03:50.425 CXX test/cpp_headers/dma.o 00:03:50.425 CXX test/cpp_headers/endian.o 00:03:50.425 CXX test/cpp_headers/env_dpdk.o 00:03:50.425 CXX test/cpp_headers/env.o 00:03:50.425 CXX test/cpp_headers/event.o 00:03:50.425 CXX test/cpp_headers/fd_group.o 00:03:50.425 CXX test/cpp_headers/fd.o 00:03:50.425 LINK nvmf_tgt 00:03:50.425 CXX test/cpp_headers/file.o 00:03:50.425 CXX test/cpp_headers/ftl.o 00:03:50.425 LINK stub 00:03:50.425 LINK env_dpdk_post_init 00:03:50.425 LINK iscsi_tgt 00:03:50.425 LINK spdk_trace_record 00:03:50.425 CXX test/cpp_headers/gpt_spec.o 00:03:50.425 CXX test/cpp_headers/hexlify.o 00:03:50.425 CXX test/cpp_headers/histogram_data.o 00:03:50.425 CXX test/cpp_headers/idxd.o 00:03:50.425 CXX test/cpp_headers/idxd_spec.o 00:03:50.425 LINK verify 00:03:50.425 LINK bdev_svc 00:03:50.425 LINK spdk_tgt 00:03:50.425 LINK ioat_perf 00:03:50.425 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:50.425 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:50.425 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:50.425 CXX test/cpp_headers/init.o 00:03:50.689 CXX test/cpp_headers/ioat.o 00:03:50.689 CXX test/cpp_headers/ioat_spec.o 00:03:50.689 CXX test/cpp_headers/json.o 00:03:50.689 CXX test/cpp_headers/iscsi_spec.o 00:03:50.689 LINK spdk_dd 00:03:50.689 CXX test/cpp_headers/jsonrpc.o 00:03:50.689 LINK spdk_trace 00:03:50.689 CXX test/cpp_headers/keyring.o 00:03:50.689 CXX test/cpp_headers/keyring_module.o 00:03:50.689 CXX test/cpp_headers/likely.o 00:03:50.689 CXX test/cpp_headers/log.o 00:03:50.689 CXX test/cpp_headers/lvol.o 00:03:50.689 CXX test/cpp_headers/memory.o 00:03:50.689 CXX test/cpp_headers/mmio.o 00:03:50.689 CXX test/cpp_headers/nbd.o 00:03:50.689 CXX test/cpp_headers/notify.o 00:03:50.689 CXX test/cpp_headers/nvme.o 00:03:50.689 LINK pci_ut 00:03:50.689 CXX test/cpp_headers/nvme_intel.o 00:03:50.689 CXX test/cpp_headers/nvme_ocssd.o 00:03:50.689 CXX test/cpp_headers/nvme_spec.o 00:03:50.689 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:50.689 LINK test_dma 00:03:50.689 CXX test/cpp_headers/nvme_zns.o 00:03:50.689 CXX test/cpp_headers/nvmf_cmd.o 00:03:50.689 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:50.950 CXX test/cpp_headers/nvmf.o 00:03:50.950 CXX test/cpp_headers/nvmf_spec.o 00:03:50.950 CXX test/cpp_headers/nvmf_transport.o 00:03:50.950 CXX test/cpp_headers/opal.o 00:03:50.950 CXX test/cpp_headers/opal_spec.o 00:03:50.950 CC test/event/event_perf/event_perf.o 00:03:50.950 CC test/event/reactor_perf/reactor_perf.o 00:03:50.950 
CXX test/cpp_headers/pci_ids.o 00:03:50.950 CC test/event/reactor/reactor.o 00:03:50.950 CXX test/cpp_headers/pipe.o 00:03:50.950 CC test/event/app_repeat/app_repeat.o 00:03:50.950 CC examples/sock/hello_world/hello_sock.o 00:03:50.950 CXX test/cpp_headers/queue.o 00:03:50.950 LINK nvme_fuzz 00:03:50.950 CC examples/thread/thread/thread_ex.o 00:03:50.950 LINK spdk_nvme 00:03:50.950 CC examples/vmd/lsvmd/lsvmd.o 00:03:50.950 CXX test/cpp_headers/reduce.o 00:03:50.950 CC test/event/scheduler/scheduler.o 00:03:50.950 CC examples/idxd/perf/perf.o 00:03:50.950 LINK spdk_bdev 00:03:50.950 CXX test/cpp_headers/rpc.o 00:03:51.211 CC examples/vmd/led/led.o 00:03:51.211 CXX test/cpp_headers/scheduler.o 00:03:51.211 CXX test/cpp_headers/scsi.o 00:03:51.211 CXX test/cpp_headers/scsi_spec.o 00:03:51.211 CXX test/cpp_headers/sock.o 00:03:51.211 CXX test/cpp_headers/stdinc.o 00:03:51.211 CXX test/cpp_headers/string.o 00:03:51.211 CXX test/cpp_headers/thread.o 00:03:51.211 CXX test/cpp_headers/trace.o 00:03:51.211 CXX test/cpp_headers/trace_parser.o 00:03:51.211 CXX test/cpp_headers/tree.o 00:03:51.211 CXX test/cpp_headers/ublk.o 00:03:51.211 CXX test/cpp_headers/util.o 00:03:51.211 CXX test/cpp_headers/uuid.o 00:03:51.211 CXX test/cpp_headers/version.o 00:03:51.211 LINK reactor_perf 00:03:51.211 LINK event_perf 00:03:51.212 CC app/vhost/vhost.o 00:03:51.212 CXX test/cpp_headers/vfio_user_pci.o 00:03:51.212 LINK reactor 00:03:51.212 CXX test/cpp_headers/vfio_user_spec.o 00:03:51.212 CXX test/cpp_headers/vhost.o 00:03:51.212 CXX test/cpp_headers/vmd.o 00:03:51.212 CXX test/cpp_headers/xor.o 00:03:51.212 LINK vhost_fuzz 00:03:51.212 CXX test/cpp_headers/zipf.o 00:03:51.212 LINK app_repeat 00:03:51.212 LINK mem_callbacks 00:03:51.471 LINK lsvmd 00:03:51.471 LINK spdk_nvme_perf 00:03:51.471 LINK led 00:03:51.471 LINK spdk_nvme_identify 00:03:51.471 LINK hello_sock 00:03:51.471 LINK spdk_top 00:03:51.471 LINK thread 00:03:51.471 LINK scheduler 00:03:51.471 CC test/nvme/reset/reset.o 00:03:51.471 CC test/nvme/e2edp/nvme_dp.o 00:03:51.471 CC test/nvme/err_injection/err_injection.o 00:03:51.471 CC test/nvme/overhead/overhead.o 00:03:51.471 CC test/nvme/sgl/sgl.o 00:03:51.471 CC test/nvme/aer/aer.o 00:03:51.471 CC test/nvme/startup/startup.o 00:03:51.471 CC test/nvme/reserve/reserve.o 00:03:51.471 CC test/accel/dif/dif.o 00:03:51.471 CC test/nvme/simple_copy/simple_copy.o 00:03:51.729 CC test/blobfs/mkfs/mkfs.o 00:03:51.729 CC test/nvme/connect_stress/connect_stress.o 00:03:51.729 CC test/nvme/boot_partition/boot_partition.o 00:03:51.729 CC test/nvme/compliance/nvme_compliance.o 00:03:51.729 CC test/nvme/fused_ordering/fused_ordering.o 00:03:51.729 LINK vhost 00:03:51.729 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:51.729 CC test/nvme/fdp/fdp.o 00:03:51.729 CC test/nvme/cuse/cuse.o 00:03:51.729 CC test/lvol/esnap/esnap.o 00:03:51.729 LINK idxd_perf 00:03:51.729 LINK startup 00:03:51.729 LINK connect_stress 00:03:51.988 LINK reserve 00:03:51.988 LINK mkfs 00:03:51.988 LINK fused_ordering 00:03:51.988 LINK reset 00:03:51.988 LINK err_injection 00:03:51.988 CC examples/nvme/hello_world/hello_world.o 00:03:51.988 LINK doorbell_aers 00:03:51.988 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:51.988 LINK nvme_dp 00:03:51.988 CC examples/nvme/reconnect/reconnect.o 00:03:51.988 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:51.988 CC examples/nvme/arbitration/arbitration.o 00:03:51.988 CC examples/nvme/abort/abort.o 00:03:51.988 CC examples/nvme/hotplug/hotplug.o 00:03:51.988 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:03:51.988 LINK boot_partition 00:03:51.988 LINK overhead 00:03:51.988 LINK memory_ut 00:03:51.988 CC examples/accel/perf/accel_perf.o 00:03:51.988 LINK simple_copy 00:03:51.988 CC examples/blob/hello_world/hello_blob.o 00:03:51.988 CC examples/blob/cli/blobcli.o 00:03:51.988 LINK sgl 00:03:51.988 LINK aer 00:03:52.247 LINK nvme_compliance 00:03:52.247 LINK dif 00:03:52.247 LINK fdp 00:03:52.247 LINK cmb_copy 00:03:52.247 LINK hello_world 00:03:52.247 LINK hotplug 00:03:52.247 LINK pmr_persistence 00:03:52.247 LINK reconnect 00:03:52.247 LINK hello_blob 00:03:52.506 LINK arbitration 00:03:52.506 LINK abort 00:03:52.506 CC test/bdev/bdevio/bdevio.o 00:03:52.506 LINK blobcli 00:03:52.506 LINK nvme_manage 00:03:52.506 LINK accel_perf 00:03:52.765 LINK iscsi_fuzz 00:03:53.023 CC examples/bdev/hello_world/hello_bdev.o 00:03:53.023 CC examples/bdev/bdevperf/bdevperf.o 00:03:53.023 LINK bdevio 00:03:53.281 LINK cuse 00:03:53.281 LINK hello_bdev 00:03:53.848 LINK bdevperf 00:03:54.105 CC examples/nvmf/nvmf/nvmf.o 00:03:54.363 LINK nvmf 00:03:57.642 LINK esnap 00:03:57.642 00:03:57.642 real 0m41.519s 00:03:57.642 user 7m27.777s 00:03:57.642 sys 1m48.431s 00:03:57.642 03:07:03 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:57.642 03:07:03 make -- common/autotest_common.sh@10 -- $ set +x 00:03:57.642 ************************************ 00:03:57.642 END TEST make 00:03:57.642 ************************************ 00:03:57.642 03:07:03 -- common/autotest_common.sh@1142 -- $ return 0 00:03:57.642 03:07:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:57.642 03:07:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:57.642 03:07:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:57.642 03:07:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.642 03:07:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:57.642 03:07:03 -- pm/common@44 -- $ pid=2951934 00:03:57.642 03:07:03 -- pm/common@50 -- $ kill -TERM 2951934 00:03:57.642 03:07:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.642 03:07:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:57.642 03:07:03 -- pm/common@44 -- $ pid=2951936 00:03:57.643 03:07:03 -- pm/common@50 -- $ kill -TERM 2951936 00:03:57.643 03:07:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.643 03:07:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:57.643 03:07:03 -- pm/common@44 -- $ pid=2951938 00:03:57.643 03:07:03 -- pm/common@50 -- $ kill -TERM 2951938 00:03:57.643 03:07:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.643 03:07:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:57.643 03:07:03 -- pm/common@44 -- $ pid=2951966 00:03:57.643 03:07:03 -- pm/common@50 -- $ sudo -E kill -TERM 2951966 00:03:57.643 03:07:03 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.643 03:07:03 -- nvmf/common.sh@7 -- # uname -s 00:03:57.643 03:07:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.643 03:07:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.643 03:07:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.643 03:07:03 -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.643 03:07:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.643 03:07:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.643 03:07:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.643 03:07:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.643 03:07:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.643 03:07:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.643 03:07:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.643 03:07:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.643 03:07:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.643 03:07:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.643 03:07:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:57.643 03:07:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.643 03:07:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.643 03:07:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.643 03:07:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.643 03:07:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.643 03:07:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.643 03:07:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.643 03:07:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.643 03:07:03 -- paths/export.sh@5 -- # export PATH 00:03:57.643 03:07:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.643 03:07:03 -- nvmf/common.sh@47 -- # : 0 00:03:57.643 03:07:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:57.643 03:07:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:57.643 03:07:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.643 03:07:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.643 03:07:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.643 03:07:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:57.643 03:07:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:57.643 03:07:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:57.643 03:07:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:57.643 03:07:03 -- spdk/autotest.sh@32 -- # uname -s 00:03:57.643 03:07:03 -- spdk/autotest.sh@32 -- # '[' 
Linux = Linux ']' 00:03:57.643 03:07:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:57.643 03:07:03 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.643 03:07:03 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:57.643 03:07:03 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.643 03:07:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:57.643 03:07:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:57.643 03:07:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:57.643 03:07:03 -- spdk/autotest.sh@48 -- # udevadm_pid=3028823 00:03:57.643 03:07:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:57.643 03:07:03 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:57.643 03:07:03 -- pm/common@17 -- # local monitor 00:03:57.643 03:07:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.643 03:07:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.643 03:07:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.643 03:07:03 -- pm/common@21 -- # date +%s 00:03:57.643 03:07:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.643 03:07:03 -- pm/common@21 -- # date +%s 00:03:57.643 03:07:03 -- pm/common@25 -- # sleep 1 00:03:57.643 03:07:03 -- pm/common@21 -- # date +%s 00:03:57.643 03:07:03 -- pm/common@21 -- # date +%s 00:03:57.643 03:07:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721005623 00:03:57.643 03:07:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721005623 00:03:57.643 03:07:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721005623 00:03:57.643 03:07:03 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721005623 00:03:57.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721005623_collect-vmstat.pm.log 00:03:57.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721005623_collect-cpu-load.pm.log 00:03:57.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721005623_collect-cpu-temp.pm.log 00:03:57.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721005623_collect-bmc-pm.bmc.pm.log 00:03:58.576 03:07:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:58.576 03:07:04 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:58.576 03:07:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:58.576 03:07:04 -- common/autotest_common.sh@10 -- # set +x 00:03:58.576 03:07:04 -- spdk/autotest.sh@59 -- # create_test_list 
00:03:58.576 03:07:04 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:58.576 03:07:04 -- common/autotest_common.sh@10 -- # set +x 00:03:58.576 03:07:04 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:58.576 03:07:04 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.576 03:07:04 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.576 03:07:04 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:58.576 03:07:04 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.576 03:07:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:58.576 03:07:04 -- common/autotest_common.sh@1455 -- # uname 00:03:58.576 03:07:04 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:58.576 03:07:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:58.576 03:07:04 -- common/autotest_common.sh@1475 -- # uname 00:03:58.576 03:07:04 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:58.576 03:07:04 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:58.576 03:07:04 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:58.576 03:07:04 -- spdk/autotest.sh@72 -- # hash lcov 00:03:58.576 03:07:04 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:58.576 03:07:04 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:58.576 --rc lcov_branch_coverage=1 00:03:58.576 --rc lcov_function_coverage=1 00:03:58.576 --rc genhtml_branch_coverage=1 00:03:58.576 --rc genhtml_function_coverage=1 00:03:58.576 --rc genhtml_legend=1 00:03:58.576 --rc geninfo_all_blocks=1 00:03:58.576 ' 00:03:58.576 03:07:04 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:58.576 --rc lcov_branch_coverage=1 00:03:58.576 --rc lcov_function_coverage=1 00:03:58.576 --rc genhtml_branch_coverage=1 00:03:58.576 --rc genhtml_function_coverage=1 00:03:58.576 --rc genhtml_legend=1 00:03:58.576 --rc geninfo_all_blocks=1 00:03:58.576 ' 00:03:58.576 03:07:04 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:58.576 --rc lcov_branch_coverage=1 00:03:58.576 --rc lcov_function_coverage=1 00:03:58.576 --rc genhtml_branch_coverage=1 00:03:58.576 --rc genhtml_function_coverage=1 00:03:58.576 --rc genhtml_legend=1 00:03:58.576 --rc geninfo_all_blocks=1 00:03:58.576 --no-external' 00:03:58.576 03:07:04 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:58.576 --rc lcov_branch_coverage=1 00:03:58.576 --rc lcov_function_coverage=1 00:03:58.576 --rc genhtml_branch_coverage=1 00:03:58.576 --rc genhtml_function_coverage=1 00:03:58.576 --rc genhtml_legend=1 00:03:58.576 --rc geninfo_all_blocks=1 00:03:58.576 --no-external' 00:03:58.576 03:07:04 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:58.834 lcov: LCOV version 1.14 00:03:58.834 03:07:04 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 
00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:05.386 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:05.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:05.386 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:05.386 
00:04:05.386-00:04:05.388 geninfo: WARNING: [the pair "<name>.gcno:no functions found" / "GCOV did not produce any data for <name>.gcno" repeats for every header stub under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/: file, ftl, gpt_spec, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, json, iscsi_spec, jsonrpc, keyring, keyring_module, likely, log, lvol, memory, mmio, nbd, notify, nvme_intel, nvme, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvmf_cmd, nvme_zns, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, tree, ublk, util, uuid, version, vfio_user_pci, vfio_user_spec, vmd, vhost, xor, zipf]
00:04:27.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:04:27.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:04:39.582 03:07:43 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:04:39.582 03:07:43 -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:39.582 03:07:43 -- common/autotest_common.sh@10 -- # set +x
00:04:39.582 03:07:43 -- spdk/autotest.sh@91 -- # rm -f
00:04:39.582 03:07:43 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:39.582 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:04:39.582 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:04:39.582 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:04:39.582 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:04:39.582 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:04:39.582 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:04:39.582 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:04:39.582 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:04:39.582 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:04:39.582 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:04:39.582 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:04:39.582 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:04:39.582 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:04:39.582 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:04:39.582 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:04:39.582 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:04:39.582 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:04:39.582 03:07:45 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:04:39.582 03:07:45 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:39.582 03:07:45 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:39.582 03:07:45 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:39.582 03:07:45 --
common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.582 03:07:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:39.582 03:07:45 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:39.582 03:07:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.582 03:07:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.582 03:07:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:39.582 03:07:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.582 03:07:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:39.582 03:07:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:39.582 03:07:45 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:39.582 03:07:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:39.582 No valid GPT data, bailing 00:04:39.582 03:07:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.582 03:07:45 -- scripts/common.sh@391 -- # pt= 00:04:39.582 03:07:45 -- scripts/common.sh@392 -- # return 1 00:04:39.582 03:07:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:39.582 1+0 records in 00:04:39.582 1+0 records out 00:04:39.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00195787 s, 536 MB/s 00:04:39.582 03:07:45 -- spdk/autotest.sh@118 -- # sync 00:04:39.582 03:07:45 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:39.582 03:07:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:39.582 03:07:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:40.959 03:07:46 -- spdk/autotest.sh@124 -- # uname -s 00:04:40.959 03:07:46 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:40.959 03:07:46 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:40.959 03:07:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.959 03:07:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.959 03:07:46 -- common/autotest_common.sh@10 -- # set +x 00:04:40.959 ************************************ 00:04:40.959 START TEST setup.sh 00:04:40.959 ************************************ 00:04:40.959 03:07:46 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:40.959 * Looking for test storage... 00:04:40.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:40.959 03:07:47 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:40.959 03:07:47 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:40.959 03:07:47 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:40.960 03:07:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.960 03:07:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.960 03:07:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:40.960 ************************************ 00:04:40.960 START TEST acl 00:04:40.960 ************************************ 00:04:40.960 03:07:47 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:40.960 * Looking for test storage... 
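The xtrace run above (autotest_common.sh@1672-@1665, scripts/common.sh@378-@392, autotest.sh@114-@118) is the pre-cleanup probe: skip zoned namespaces, treat a namespace with no partition signature as unused, and zero its first MiB. A minimal standalone sketch of the same sequence, assuming root and disks whose contents may be destroyed; the script below is illustrative, not the SPDK helper itself:

```bash
#!/usr/bin/env bash
# Sketch of the probe traced above; run as root, and only on machines
# whose NVMe namespaces may be overwritten (it wipes unpartitioned ones).
set -euo pipefail

for sysdev in /sys/block/nvme*; do
    name=${sysdev##*/}                       # e.g. nvme0n1
    # is_block_zoned: anything other than "none" marks a zoned namespace.
    if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
        echo "skipping zoned device $name"
        continue
    fi
    # block_in_use: blkid prints nothing (and exits non-zero, hence || true
    # under set -e) when no partition table exists -- the "No valid GPT
    # data, bailing" case in the log.
    pt=$(blkid -s PTTYPE -o value "/dev/$name" || true)
    if [[ -z $pt ]]; then
        # autotest.sh@114: zero the first MiB so stale metadata is gone.
        dd if=/dev/zero of="/dev/$name" bs=1M count=1
    fi
done
sync
```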
00:04:41.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:41.217 03:07:47 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:41.217 03:07:47 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:41.217 03:07:47 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:41.217 03:07:47 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:41.217 03:07:47 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:41.217 03:07:47 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:41.217 03:07:47 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:41.217 03:07:47 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.217 03:07:47 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:41.217 03:07:47 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:41.217 03:07:47 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:41.217 03:07:47 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:41.217 03:07:47 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:41.217 03:07:47 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:41.217 03:07:47 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.217 03:07:47 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.594 03:07:48 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:42.594 03:07:48 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:42.594 03:07:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.594 03:07:48 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:42.594 03:07:48 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.594 03:07:48 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:43.531 Hugepages 00:04:43.531 node hugesize free / total 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.531 00:04:43.531 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.531 03:07:49 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]]
00:04:43.531 [... the same check/skip sequence (setup/acl.sh@19 "[[ <BDF> == *:*:*.* ]]", @20 "[[ ioatdma == nvme ]]", @20 continue, @18 read -r _ dev _ _ _ driver _) repeats for 0000:00:04.1 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7 ...]
00:04:43.789 03:07:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]]
00:04:43.789 03:07:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:43.789 03:07:49 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:04:43.789 03:07:49 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:43.789 03:07:49 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:43.789 03:07:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:43.789 03:07:49 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:04:43.789 03:07:49 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:04:43.789 03:07:49 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:43.789 03:07:49 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:43.789 03:07:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:43.789 ************************************
00:04:43.789 START TEST denied
00:04:43.789 ************************************
00:04:43.789 03:07:49 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:04:43.789 03:07:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0'
00:04:43.789 03:07:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:04:43.789 03:07:49 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0'
00:04:43.789 03:07:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.789 03:07:49 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:45.165 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
00:04:45.165 03:07:51 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0
00:04:45.165 03:07:51 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:04:45.165 03:07:51 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:04:45.165 03:07:51 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]]
00:04:45.165 03:07:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
00:04:45.165 03:07:51 setup.sh.acl.denied --
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:45.165 03:07:51 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:45.165 03:07:51 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:45.165 03:07:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.165 03:07:51 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.716 00:04:47.716 real 0m3.914s 00:04:47.716 user 0m1.126s 00:04:47.716 sys 0m1.870s 00:04:47.716 03:07:53 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.716 03:07:53 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:47.716 ************************************ 00:04:47.716 END TEST denied 00:04:47.716 ************************************ 00:04:47.716 03:07:53 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:47.716 03:07:53 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:47.716 03:07:53 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.716 03:07:53 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.716 03:07:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:47.716 ************************************ 00:04:47.716 START TEST allowed 00:04:47.716 ************************************ 00:04:47.716 03:07:53 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:47.716 03:07:53 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:47.716 03:07:53 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:47.716 03:07:53 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:47.716 03:07:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.716 03:07:53 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.252 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:50.252 03:07:55 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:50.252 03:07:55 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:50.252 03:07:55 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:50.252 03:07:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.252 03:07:55 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:51.628 00:04:51.629 real 0m3.806s 00:04:51.629 user 0m1.005s 00:04:51.629 sys 0m1.628s 00:04:51.629 03:07:57 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.629 03:07:57 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:51.629 ************************************ 00:04:51.629 END TEST allowed 00:04:51.629 ************************************ 00:04:51.629 03:07:57 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:51.629 00:04:51.629 real 0m10.460s 00:04:51.629 user 0m3.240s 00:04:51.629 sys 0m5.194s 00:04:51.629 03:07:57 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.629 03:07:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:51.629 ************************************ 00:04:51.629 END TEST acl 00:04:51.629 ************************************ 00:04:51.629 03:07:57 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:51.629 03:07:57 setup.sh -- 
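The denied/allowed pair above exercises scripts/setup.sh purely through its PCI_BLOCKED and PCI_ALLOWED environment variables. Reproducing the two checks by hand might look like the following, using the same variables and grep patterns the log shows; the BDF 0000:88:00.0 is this runner's NVMe controller and will differ elsewhere:

```bash
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# denied: a blocked controller must be skipped by "setup.sh config"
PCI_BLOCKED=' 0000:88:00.0' ./scripts/setup.sh config \
    | grep 'Skipping denied controller at 0000:88:00.0'
./scripts/setup.sh reset

# allowed: with the controller explicitly allowed, it must be rebound
# from the kernel nvme driver to vfio-pci
PCI_ALLOWED='0000:88:00.0' ./scripts/setup.sh config \
    | grep -E '0000:88:00.0 .*: nvme -> .*'
./scripts/setup.sh reset
```

Both invocations take the same setup.sh config path; only the environment variable decides whether the controller is skipped or rebound.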
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:51.629 03:07:57 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.629 03:07:57 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.629 03:07:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:51.629 ************************************ 00:04:51.629 START TEST hugepages 00:04:51.629 ************************************ 00:04:51.629 03:07:57 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:51.629 * Looking for test storage... 00:04:51.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.629 03:07:57 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 41181388 kB' 'MemAvailable: 44683216 kB' 'Buffers: 3736 kB' 'Cached: 12793868 kB' 'SwapCached: 0 kB' 'Active: 9755660 kB' 'Inactive: 3501484 kB' 'Active(anon): 9361452 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462776 kB' 'Mapped: 193384 kB' 'Shmem: 8901912 kB' 'KReclaimable: 199952 kB' 'Slab: 569524 kB' 'SReclaimable: 199952 kB' 'SUnreclaim: 369572 kB' 'KernelStack: 12816 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 10466784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:04:51.629 [... bash xtrace of the field-by-field scan follows: for each /proc/meminfo field from MemTotal down to HugePages_Surp the log records setup/common.sh@32 "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]", @32 continue, @31 IFS=': ' and @31 read -r var val _ ...]
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:51.630 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:51.631
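Everything from the printf dump above down to "echo 2048" is the xtrace of one helper: get_meminfo scans /proc/meminfo line by line until the requested field matches, then prints its value. A simplified standalone version of the same idea (not the SPDK implementation):

```bash
# Read /proc/meminfo field by field and report one value. Each loop
# iteration corresponds to one compare/continue/read triplet in the trace.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo Hugepagesize   # prints 2048 (kB) on this runner
```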
03:07:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.631 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:51.631 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:51.631 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:51.631 03:07:57 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:51.631 03:07:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.631 03:07:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.631 03:07:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:51.631 ************************************ 00:04:51.631 START TEST default_setup 00:04:51.631 ************************************ 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.631 03:07:57 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.006 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:53.006 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:53.006 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:53.006 
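The clear_hp trace above zeroes every hugepage pool under /sys/devices/system/node, after which default_setup asks for 1024 2048 kB pages on node 0. As a shell one-off it amounts to the following; this needs root, and the node count and page counts match this runner only:

```bash
# Zero each per-node hugepage pool (every page size), as clear_hp does...
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
# ...then reserve the test's 1024 x 2048kB pages on node0.
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
```

Writing to nr_hugepages can only shrink a pool down to the number of pages currently in use, which is why the harness clears the pools while no SPDK process is running.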
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:53.006 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:53.006 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:53.006 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:53.006 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:53.006 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:53.006 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:53.006 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:53.006 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:53.006 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:53.006 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:53.006 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:53.006 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:53.993 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43298916 kB' 'MemAvailable: 46800692 kB' 'Buffers: 3736 kB' 'Cached: 12793960 kB' 'SwapCached: 0 kB' 'Active: 9774472 kB' 'Inactive: 3501484 kB' 'Active(anon): 9380264 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482172 kB' 'Mapped: 193396 kB' 'Shmem: 8902004 kB' 'KReclaimable: 199848 kB' 'Slab: 568940 kB' 'SReclaimable: 199848 kB' 'SUnreclaim: 369092 kB' 
'KernelStack: 12736 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10487788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 
03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.993 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.994 03:08:00 
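The wall of trace above is one call: verify_nr_hugepages first confirms transparent hugepages are not forced off (the "always [madvise] never != *\[\n\e\v\e\r\]*" test against the kernel's transparent_hugepage/enabled setting), then runs get_meminfo AnonHugePages, which scans every /proc/meminfo key (each "continue" record is one rejected key) until it reaches AnonHugePages and returns its value, 0 kB, so anon=0. A compact re-creation of the helper, simplified from the setup/common.sh lines being traced:

    #!/usr/bin/env bash
    shopt -s extglob

    # Simplified sketch of the get_meminfo helper, per the xtrace above; not the real code.
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # With a node argument, read that node's counters instead of the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix lines with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated "continue" records above
            echo "$val"                        # kB values, or bare counts for HugePages_*
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages      # -> 0 in this run
    get_meminfo HugePages_Total    # -> 1024 in this run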
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43297912 kB' 'MemAvailable: 46799688 kB' 'Buffers: 3736 kB' 'Cached: 12793960 kB' 'SwapCached: 0 kB' 'Active: 9774552 kB' 'Inactive: 3501484 kB' 'Active(anon): 9380344 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481796 kB' 'Mapped: 193396 kB' 'Shmem: 8902004 kB' 'KReclaimable: 199848 kB' 'Slab: 568940 kB' 'SReclaimable: 199848 kB' 'SUnreclaim: 369092 kB' 'KernelStack: 12736 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10487804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.994 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.995 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- 
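That second scan ended with surp=0, and the third, now starting, will end with resv=0: HugePages_Surp counts surplus pages allocated beyond nr_hugepages through the overcommit limit, while HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in. Both being zero means the pool is exactly the 1024 pages requested. Outside the harness the same numbers can be pulled directly; these one-liners are illustrative, not from the suite:

    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # 0 in this run
    awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo   # 0 in this run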
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43298332 kB' 'MemAvailable: 46800128 kB' 'Buffers: 3736 kB' 'Cached: 12793968 kB' 'SwapCached: 0 kB' 'Active: 9774320 kB' 'Inactive: 3501484 kB' 'Active(anon): 9380112 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481648 kB' 'Mapped: 193412 kB' 'Shmem: 8902012 kB' 'KReclaimable: 199888 kB' 'Slab: 568972 kB' 'SReclaimable: 199888 kB' 'SUnreclaim: 369084 kB' 'KernelStack: 12656 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10487828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.996 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.997 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _
[... xtrace condensed: setup/common.sh@31-32 read each remaining /proc/meminfo field (Shmem through HugePages_Free) and skip it with continue, since none matches HugePages_Rsvd ...]
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:53.998 nr_hugepages=1024
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:53.998 resv_hugepages=0
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:53.998 surplus_hugepages=0
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:53.998 anon_hugepages=0
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
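For readers following the trace: get_meminfo in setup/common.sh resolves either /proc/meminfo or a per-node /sys/devices/system/node/nodeN/meminfo file, strips the "Node N " prefix, and then scans field by field until the requested counter matches, which is exactly what the skip/continue iterations above show. A minimal self-contained sketch of that loop, reconstructed from the trace rather than copied from the SPDK source:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below

  # Print the value of one meminfo field, optionally for a single NUMA node.
  get_meminfo() {
      local get=$1 node=$2 var val _ mem_f mem line
      mem_f=/proc/meminfo
      # Per-node counters live in /sys and carry a "Node N " line prefix.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix, if any
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo HugePages_Total     # -> 1024 on the machine traced here
  get_meminfo HugePages_Surp 0    # -> 0, read from node0's meminfo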
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:53.998 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43298332 kB' 'MemAvailable: 46800128 kB' 'Buffers: 3736 kB' 'Cached: 12793980 kB' 'SwapCached: 0 kB' 'Active: 9774412 kB' 'Inactive: 3501484 kB' 'Active(anon): 9380204 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481784 kB' 'Mapped: 193412 kB' 'Shmem: 8902024 kB' 'KReclaimable: 199888 kB' 'Slab: 568972 kB' 'SReclaimable: 199888 kB' 'SUnreclaim: 369084 kB' 'KernelStack: 12688 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10487848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
[... xtrace condensed: setup/common.sh@31-32 compare every field from MemTotal through Unaccepted against HugePages_Total and skip each with continue ...]
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
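The arithmetic guards traced at setup/hugepages.sh@107 and @109 (and re-checked at @110 just below) encode the pool identity this test relies on: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages. In script form, reusing the get_meminfo sketch above and variable names that mirror the trace:

  # Sanity check in the spirit of setup/hugepages.sh@107-@110.
  nr_hugepages=1024 surp=0 resv=0
  total=$(get_meminfo HugePages_Total)   # 1024 in the trace above
  if (( total != nr_hugepages + surp + resv )); then
      echo "unexpected hugepage pool: $total" >&2
  fi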
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
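get_nodes, traced above, discovers the NUMA topology by globbing /sys/devices/system/node/node+([0-9]) and records each node's share of the pool; here node0 holds all 1024 pages and node1 none. A sketch of that walk, under the assumption that the per-node figure comes from a per-node get_meminfo call (the trace only shows the already-expanded assignments, so the right-hand side here is illustrative):

  shopt -s extglob   # for the node+([0-9]) glob; get_meminfo as sketched above
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      id=${node##*node}                          # node0 -> 0, node1 -> 1
      nodes_sys[id]=$(get_meminfo HugePages_Total "$id")
  done
  echo "no_nodes=${#nodes_sys[@]}"               # 2 on the traced machine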
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:54.000 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 18996004 kB' 'MemUsed: 13880936 kB' 'SwapCached: 0 kB' 'Active: 7315036 kB' 'Inactive: 3266244 kB' 'Active(anon): 7126464 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3266244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10344776 kB' 'Mapped: 67488 kB' 'AnonPages: 239828 kB' 'Shmem: 6889960 kB' 'KernelStack: 7608 kB' 'PageTables: 4760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125680 kB' 'Slab: 326216 kB' 'SReclaimable: 125680 kB' 'SUnreclaim: 200536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace condensed: node0's fields MemTotal through HugePages_Free are compared against HugePages_Surp and skipped with continue ...]
00:04:54.261 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.261 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:54.261 03:08:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:54.261 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:54.261 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:54.261 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:54.261 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:54.261 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:54.261 node0=1024 expecting 1024
00:04:54.261 03:08:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:54.261
00:04:54.261 real 0m2.453s
00:04:54.261 user 0m0.679s
00:04:54.261 sys 0m0.893s
00:04:54.261 03:08:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:54.261 03:08:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:54.262 ************************************
00:04:54.262 END TEST default_setup
00:04:54.262 ************************************
00:04:54.262 03:08:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:54.262 03:08:00 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:54.262 03:08:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:54.262 03:08:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:54.262 03:08:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:54.262 ************************************
00:04:54.262 START TEST per_node_1G_alloc
00:04:54.262 ************************************
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
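The get_test_nr_hugepages 1048576 0 1 call just traced converts a per-node size in kB into a page count: 1 GiB per node, at the 2048 kB Hugepagesize reported in the snapshots above, is 1048576 / 2048 = 512 pages, which is where the nr_hugepages=512 and NRHUGE=512 HUGENODE=0,1 in the records below come from. The arithmetic, spelled out with the get_meminfo sketch from earlier:

  # Page-count arithmetic behind get_test_nr_hugepages 1048576 0 1 (kB units).
  size_kb=1048576                       # 1 GiB requested on each of nodes 0 and 1
  hp_kb=$(get_meminfo Hugepagesize)     # 2048 on the traced machine
  echo $(( size_kb / hp_kb ))           # -> 512 pages per node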
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:54.262 03:08:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:55.196 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:55.196 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:55.196 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:55.196 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:55.196 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:55.196 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:55.196 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:55.196 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:55.196 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:55.196 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:55.196 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:55.196 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:55.196 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:55.196 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:55.196 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:55.196 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:55.196 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:55.460 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:55.460 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:55.460 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:55.460 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:55.460 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:55.460 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:55.460 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:55.460 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:55.460 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
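The transparent-hugepage guard just traced at setup/hugepages.sh@96 works because the kernel brackets the active THP policy ("always [madvise] never" on this host), and the test pattern rejects only the [never] selection. A standalone equivalent of that check, as an illustration rather than the SPDK source:

  # The kernel marks the active THP policy with brackets, e.g. "always [madvise] never".
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp == *"[never]"* ]]; then
      echo "THP disabled; AnonHugePages is expected to stay 0" >&2
  fi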
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
[trimmed: setup/common.sh@32 compares each /proc/meminfo key in turn against AnonHugePages and hits continue on every non-matching field (MemTotal, MemFree, ..., HardwareCorrupted); repeated trace lines elided]
00:04:55.462 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.462 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.462 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:55.462 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
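For readers replaying this trace: each get_meminfo call above simply splits /proc/meminfo (or a per-node meminfo file) on ': ' and echoes the value of the one requested key, skipping every other field with continue. A minimal standalone sketch of that pattern, using our own names and assuming bash with extglob; this is not the verbatim setup/common.sh source:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above (names ours, not SPDK's).
    shopt -s extglob                        # for the "Node <id> " prefix strip
    get_meminfo_sketch() {
        local get=$1 node=${2:-}            # key to look up, optional NUMA node
        local mem_f=/proc/meminfo
        # With a node id, read the per-node copy instead (the @23 check above).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines begin "Node <id> "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Free  -> prints 1024 on this box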
00:04:55.462 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.462 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.462 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
[trimmed: same setup/common.sh@19-@31 locals, mapfile of /proc/meminfo and full snapshot as in the AnonHugePages call above; this pass reports MemFree: 43294600 kB with the hugepage counters unchanged ('HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0'), then scans every key with continue until HugePages_Surp matches]
00:04:55.463 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.463 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.463 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:55.463 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
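Note that node= stays empty in every call shown here, so this pass reads the system-wide /proc/meminfo; when a node id is supplied, the @23 existence check above switches to /sys/devices/system/node/node<N>/meminfo. The same per-node counters are also exposed per page size under the standard hugetlb sysfs layout; the loop below is our illustration, not a line from this script (the paths are the documented kernel interface):

    # Per-node 2 MiB hugepage counters straight from sysfs
    # (Documentation/admin-guide/mm/hugetlbpage.rst).
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        hp=$node_dir/hugepages/hugepages-2048kB
        [[ -d $hp ]] || continue
        printf '%s: total=%s free=%s surplus=%s\n' "${node_dir##*/}" \
            "$(<"$hp/nr_hugepages")" "$(<"$hp/free_hugepages")" \
            "$(<"$hp/surplus_hugepages")"
    done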
00:04:55.463 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.463 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.463 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
[trimmed: same locals/mapfile setup and full /proc/meminfo snapshot (MemFree: 43294372 kB, hugepage counters unchanged), then the per-key scan with continue until HugePages_Rsvd matches]
00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:55.465 nr_hugepages=1024 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.465
resv_hugepages=0 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.465 surplus_hugepages=0 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.465 anon_hugepages=0 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.465 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.466 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.466 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.466 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.466 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.466 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43294372 kB' 'MemAvailable: 46796168 kB' 'Buffers: 3736 kB' 'Cached: 12794116 kB' 'SwapCached: 0 kB' 'Active: 9774880 kB' 'Inactive: 3501484 kB' 'Active(anon): 9380672 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481848 kB' 'Mapped: 193428 kB' 'Shmem: 8902160 kB' 'KReclaimable: 199888 kB' 'Slab: 569060 kB' 'SReclaimable: 199888 kB' 'SUnreclaim: 369172 kB' 'KernelStack: 12672 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10487964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB' 00:04:55.466 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.466 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.466 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.466 
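A note on the trace format before the next scan: strings such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are just bash xtrace escaping the literal right-hand side of a [[ == ]] comparison, and each IFS/read/continue run is one pass of the field-scan loop inside setup/common.sh's get_meminfo helper. A minimal sketch of that helper, reconstructed from the xtrace above (simplified for readability, not the verbatim source):

    # Sketch of get_meminfo (setup/common.sh@16-33, as traced above).
    shopt -s extglob                # the traced script relies on extended globs
    get_meminfo() {                 # usage: get_meminfo <field> [node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        # @23-24: per-node queries read that node's own meminfo file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # @29: per-node files prefix every line with "Node N "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        # @31-33: the scan loop that produces the long continue runs above
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"             # 0 for HugePages_Rsvd, 1024 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Called as get_meminfo HugePages_Total it prints 1024 on this run; called as get_meminfo HugePages_Surp 0 it reads /sys/devices/system/node/node0/meminfo instead and prints 0.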
[xtrace scan elided: setup/common.sh@31-32 walks the /proc/meminfo snapshot above field by field (MemTotal, MemFree, ... HugePages_Free) until HugePages_Total matches]
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:55.467 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20054352 kB' 'MemUsed: 12822588 kB' 'SwapCached: 0 kB' 'Active: 7315056 kB' 'Inactive: 3266244 kB' 'Active(anon): 7126484 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3266244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10344776 kB' 'Mapped: 67504 kB' 'AnonPages: 239700 kB' 'Shmem: 6889960 kB' 'KernelStack: 7640 kB' 'PageTables: 4840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125680 kB' 'Slab: 326268 kB' 'SReclaimable: 125680 kB' 'SUnreclaim: 200588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
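The get_nodes and hugepages.sh@115-@117 records above boil down to the bookkeeping sketched below, using the get_meminfo sketch from earlier. The trace only shows the already-expanded value 512 at @30, so reading each node's nr_hugepages from sysfs, and the hugepages-2048kB directory name (taken from the 'Hugepagesize: 2048 kB' line in the snapshot), are assumptions rather than the verbatim hugepages.sh source:

    # Per-node bookkeeping (hugepages.sh@27-33 and @115-117), sketched.
    shopt -s extglob
    nodes_sys=() nodes_test=()      # indexed by node number; nodes_test is
                                    # seeded with the per-node targets elsewhere
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # .../node1 -> index 1; hypothetical source of the literal 512
            # seen in the trace: the node's count of 2 MB hugepages
            nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this machine
        (( no_nodes > 0 ))          # fail if no NUMA nodes were found
    }

    resv=$(get_meminfo HugePages_Rsvd)            # 0 in this run
    get_nodes
    for node in "${!nodes_test[@]}"; do           # hugepages.sh@115-117
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done

Querying each node's own meminfo (rather than the global file) is what the two per-node get_meminfo HugePages_Surp calls in the trace are doing.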
[xtrace scan elided: setup/common.sh@31-32 walks the node0 meminfo snapshot above (MemTotal, MemFree, ... HugePages_Free) until HugePages_Surp matches]
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23239412 kB' 'MemUsed: 4425340 kB' 'SwapCached: 0 kB' 'Active: 2459700 kB' 'Inactive: 235240 kB' 'Active(anon): 2254064 kB' 'Inactive(anon): 0 kB' 'Active(file): 205636 kB' 'Inactive(file): 235240 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2453080 kB' 'Mapped: 125924 kB' 'AnonPages: 242024 kB' 'Shmem: 2012204 kB' 'KernelStack: 5032 kB' 'PageTables: 3156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74208 kB' 'Slab: 242792 kB' 'SReclaimable: 74208 kB' 'SUnreclaim: 168584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
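The @126-@130 records that follow collect each node's count and assert the even split. Only the already-expanded comparison [[ 512 == \5\1\2 ]] is visible in the trace, so the check below is a reconstruction of that bookkeeping, assuming nodes_test and nodes_sys were filled as sketched earlier:

    # Final check (hugepages.sh@126-130, reconstructed): gather the distinct
    # per-node counts, print each node's result, then assert the expectation.
    declare -A sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1           # set of distinct tested counts
        sorted_s[${nodes_sys[node]}]=1            # set of distinct kernel counts
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # xtrace escapes the literal right-hand pattern, hence \5\1\2 in the
    # trace; with an even 1024-page split over 2 nodes this is 512 == 512.
    [[ 512 == \5\1\2 ]]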
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:55.469 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the scan reads every remaining meminfo key, Inactive(file) through Unaccepted, with IFS=': ' read -r var val _; each fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continues]
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:55.470 node0=512 expecting 512
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:55.470 node1=512 expecting 512
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:55.470
00:04:55.470 real 0m1.378s
00:04:55.470 user 0m0.549s
00:04:55.470 sys 0m0.782s
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:55.470 03:08:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:55.470 ************************************
00:04:55.470 END TEST per_node_1G_alloc
00:04:55.470 ************************************
00:04:55.470 03:08:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:55.470 03:08:01 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:55.470 03:08:01 setup.sh.hugepages --
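
Before the even_2G_alloc harness records that follow, a note on the two small mechanisms this hand-off exercises in setup/hugepages.sh: the even per-node split (traced below at hugepages.sh@81-84) and the bucket check that produced the "node0=512 expecting 512" verdict above (hugepages.sh@126-130). A minimal bash sketch of both, reconstructed from the traced statements; the variable names come from the trace, the standalone harness around them is an assumption:

    #!/usr/bin/env bash
    # Even split (hugepages.sh@81-84): hand the remaining pages to the
    # remaining nodes, so 1024 pages over 2 nodes becomes 512 + 512.
    _nr_hugepages=1024 _no_nodes=2
    nodes_test=()
    while ((_no_nodes > 0)); do
        nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))  # 512 both rounds
        : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))         # 512, then 0
        : $((--_no_nodes))                                        # 1, then 0
    done

    # Bucket check (hugepages.sh@126-130): use the distinct per-node counts
    # as associative-array keys; an even allocation leaves exactly one key.
    declare -A sorted_t=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        echo "node$node=${nodes_test[node]} expecting 512"
    done
    ((${#sorted_t[@]} == 1)) && echo 'per-node allocation is even'

The real verifier also folds per-node surplus pages into nodes_test (the "(( nodes_test[node] += 0 ))" record above) and tracks system state in a parallel sorted_s array; the single-bucket test is the simplified core.
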
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.470 03:08:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.470 03:08:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:55.730 ************************************ 00:04:55.730 START TEST even_2G_alloc 00:04:55.730 ************************************ 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.730 03:08:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.667 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.667 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
00:04:56.667 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.667 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.667 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.667 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.667 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.667 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.667 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.667 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.667 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.667 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.667 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.667 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.667 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.667 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.667 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.929 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.930 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43259344 kB' 'MemAvailable: 46761140 kB' 'Buffers: 3736 kB' 'Cached: 12794208 kB' 'SwapCached: 0 kB' 'Active: 9775424 kB' 'Inactive: 3501484 kB' 'Active(anon): 9381216 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482216 kB' 'Mapped: 193464 kB' 'Shmem: 8902252 kB' 'KReclaimable: 199888 kB' 'Slab: 569128 kB' 'SReclaimable: 199888 kB' 'SUnreclaim: 369240 kB' 'KernelStack: 12656 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10488456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:04:56.930 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: every key from MemTotal through HardwareCorrupted fails the \A\n\o\n\H\u\g\e\P\a\g\e\s match and continues]
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
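
Each meminfo round condensed above is a single call to the get_meminfo helper in setup/common.sh, whose xtrace accounts for nearly all of this log's bulk. A sketch of that helper, reconstructed from the traced statements (common.sh@17-33); treat it as approximate rather than canonical:

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern used below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node counters live in sysfs and carry a "Node N " line prefix.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip any "Node N " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Surp val=0
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Surp       # system-wide: 0 on this box
    get_meminfo HugePages_Total 0    # node 0 only

The backslash-riddled comparisons in the trace ([[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and friends) are not corruption: when the right-hand side of [[ == ]] is quoted, bash's xtrace escapes each character to show the operand is matched literally rather than as a glob.
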
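The hugepage fields in the snapshots themselves also confirm the size this test requested (get_test_nr_hugepages 2097152 in the trace earlier): 1024 pages of 2048 kB each is exactly the 2 GiB the test name promises. A quick check:

    # 1024 pages x 2048 kB/page = 2097152 kB -- the 'Hugetlb: 2097152 kB' field
    echo "$((1024 * 2048)) kB"
    echo "$((1024 * 2048 / 1024 / 1024)) GiB"   # 2, hence even_2G_alloc
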
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43261980 kB' 'MemAvailable: 46763776 kB' 'Buffers: 3736 kB' 'Cached: 12794212 kB' 'SwapCached: 0 kB' 'Active: 9775276 kB' 'Inactive: 3501484 kB' 'Active(anon): 9381068 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482060 kB' 'Mapped: 193444 kB' 'Shmem: 8902256 kB' 'KReclaimable: 199888 kB' 'Slab: 569128 kB' 'SReclaimable: 199888 kB' 'SUnreclaim: 369240 kB' 'KernelStack: 12704 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10488476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:04:56.931 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: every key from MemTotal through HugePages_Rsvd fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continues]
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
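
The hugepages.sh@97/@99/@100 records above collect three counters before the per-node pass. Reusing the get_meminfo sketch from earlier, the gathering step amounts to the lines below; the arithmetic verify_nr_hugepages later performs with these values falls outside this excerpt, so only the collection is shown, with this run's values in the comments:

    anon=$(get_meminfo AnonHugePages)    # 0 (kB) -- no transparent hugepages in use
    surp=$(get_meminfo HugePages_Surp)   # 0 -- no pages allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)   # queried next in the log; 0 per the snapshot
    echo "anon=$anon surp=$surp resv=$resv"
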
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43262708 kB' 'MemAvailable: 46764504 kB' 'Buffers: 3736 kB' 'Cached: 12794236 kB' 'SwapCached: 0 kB' 'Active: 9775232 kB' 'Inactive: 3501484 kB' 'Active(anon): 9381024 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482036 kB' 'Mapped: 193444 kB' 'Shmem: 8902280 kB' 'KReclaimable: 199888 kB' 'Slab: 569192 kB' 'SReclaimable: 199888 kB' 'SUnreclaim: 369304 kB' 'KernelStack: 12704 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10488496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the \H\u\g\e\P\a\g\e\s\_\R\s\v\d scan proceeds the same way, with MemTotal through AnonPages failing and continuing so far]
00:04:56.933 03:08:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.933 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 
03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.934 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:56.935 nr_hugepages=1024 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.935 resv_hugepages=0 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.935 surplus_hugepages=0 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.935 anon_hugepages=0 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.935 03:08:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- 
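The two lookups above (HugePages_Surp, then HugePages_Rsvd) are the same setup/common.sh get_meminfo helper walking every meminfo line until the requested key matches, which is why the xtrace repeats one [[ ... ]]/continue pair per key. Below is a minimal sketch of that helper as the @16-@33 records suggest it works; the control flow is reconstructed from the trace, so details may differ from the real source:

    #!/usr/bin/env bash
    shopt -s extglob # required by the +([0-9]) pattern below

    get_meminfo() {
        local get=$1  # key to report, e.g. HugePages_Rsvd
        local node=$2 # optional NUMA node; empty selects the global view
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With $node empty this probes the literal path .../node/node/meminfo
        # (visible at common.sh@23 above), which fails, so the global file stays.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it so both
        # file flavors parse the same way (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")

        # common.sh@31-@33: split "Key: value [unit]" and print the value of
        # the first matching key; every other key just hits continue.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total  # prints 1024 in this run
    get_meminfo HugePages_Surp 0 # prints 0 for node 0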
# mem=("${mem[@]#Node +([0-9]) }") 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43262708 kB' 'MemAvailable: 46764504 kB' 'Buffers: 3736 kB' 'Cached: 12794236 kB' 'SwapCached: 0 kB' 'Active: 9774960 kB' 'Inactive: 3501484 kB' 'Active(anon): 9380752 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481756 kB' 'Mapped: 193444 kB' 'Shmem: 8902280 kB' 'KReclaimable: 199888 kB' 'Slab: 569192 kB' 'SReclaimable: 199888 kB' 'SUnreclaim: 369304 kB' 'KernelStack: 12704 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10488516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.935 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.937 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.937 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.937 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.937 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20040964 kB' 'MemUsed: 12835976 kB' 'SwapCached: 0 kB' 'Active: 7315456 kB' 'Inactive: 3266244 kB' 'Active(anon): 7126884 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3266244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10344780 kB' 'Mapped: 67516 
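get_nodes, traced just above at hugepages.sh@27-@33, discovers the NUMA nodes through the extglob pattern node+([0-9]) and records each node's current hugepage count; here both assignments expand to 512, so no_nodes=2 and the 1024 pages sit split evenly. Below is a sketch of that discovery together with the @115-@117 per-node pass that follows; the nr_hugepages sysfs path, the stub fallback, and the verify_per_node name are inferred for illustration, not quoted from the script:

    #!/usr/bin/env bash
    shopt -s extglob

    declare -a nodes_sys
    nodes_test=(512 512) # this run's expectation: 1024 pages split evenly
    no_nodes=0

    # Fallback so the sketch runs standalone; the real helper is get_meminfo
    # from setup/common.sh (sketched earlier).
    type get_meminfo &>/dev/null || get_meminfo() { echo 0; }

    # Sketch of hugepages.sh@27-@33: one nodes_sys entry per NUMA node with
    # that node's count of default-sized (2048 kB) hugepages.
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            [[ -e $node ]] || continue # non-NUMA host: glob stays literal
            # ${node##*node} reduces ".../node0" to "0"; the xtrace shows the
            # assignment already expanded, e.g. nodes_sys[...]=512.
            nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        ((no_nodes > 0))
    }

    # Sketch of hugepages.sh@115-@117: fold reserved and surplus pages into
    # each node's expected count before comparing against nodes_sys.
    verify_per_node() {
        local node resv=0 # resv was computed from HugePages_Rsvd above
        for node in "${!nodes_test[@]}"; do
            ((nodes_test[node] += resv))
            ((nodes_test[node] += $(get_meminfo HugePages_Surp "$node")))
        done
    }

    get_nodes && verify_per_node
    echo "no_nodes=$no_nodes sys=(${nodes_sys[*]}) test=(${nodes_test[*]})"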
00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:56.936 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.937 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.937 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.937 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.937 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20040964 kB' 'MemUsed: 12835976 kB' 'SwapCached: 0 kB' 'Active: 7315456 kB' 'Inactive: 3266244 kB' 'Active(anon): 7126884 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3266244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10344780 kB' 'Mapped: 67516 kB' 'AnonPages: 240048 kB' 'Shmem: 6889964 kB' 'KernelStack: 7624 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125680 kB' 'Slab: 326392 kB' 'SReclaimable: 125680 kB' 'SUnreclaim: 200712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[repetitive xtrace omitted: every node0 key from MemTotal through HugePages_Free fails [[ $var == HugePages_Surp ]] at common.sh@32 and continues]
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
'KernelStack: 5048 kB' 'PageTables: 3164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74208 kB' 'Slab: 242796 kB' 'SReclaimable: 74208 kB' 'SUnreclaim: 168588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.938 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.939 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:57.202 node0=512 expecting 512 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:57.202 node1=512 expecting 512 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:57.202 00:04:57.202 real 0m1.450s 00:04:57.202 user 0m0.649s 00:04:57.202 sys 0m0.760s 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.202 03:08:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:57.202 ************************************ 00:04:57.202 END TEST even_2G_alloc 00:04:57.202 ************************************ 00:04:57.202 03:08:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:57.202 03:08:03 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:57.202 03:08:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.202 03:08:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.202 03:08:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:57.202 
************************************ 00:04:57.202 START TEST odd_alloc 00:04:57.202 ************************************ 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.202 03:08:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.134 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:58.134 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:58.134 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:58.134 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:58.135 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:58.135 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:58.135 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:04:58.135 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:58.135 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:58.135 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:58.135 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:58.135 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:58.135 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:58.135 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:58.135 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:58.135 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:58.135 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:58.398 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43272620 kB' 'MemAvailable: 46774412 kB' 'Buffers: 3736 kB' 'Cached: 12794340 kB' 'SwapCached: 0 kB' 'Active: 9775644 kB' 'Inactive: 3501484 kB' 'Active(anon): 9381436 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482300 kB' 'Mapped: 192936 kB' 'Shmem: 8902384 kB' 'KReclaimable: 199880 kB' 'Slab: 568752 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 368872 kB' 'KernelStack: 12656 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 
'Committed_AS: 10478328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.399 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.400 
03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43269172 kB' 'MemAvailable: 46770964 kB' 'Buffers: 3736 kB' 'Cached: 12794344 kB' 'SwapCached: 0 kB' 'Active: 9777512 kB' 'Inactive: 3501484 kB' 'Active(anon): 9383304 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484220 kB' 'Mapped: 193408 kB' 'Shmem: 8902388 kB' 'KReclaimable: 199880 kB' 'Slab: 568736 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 368856 kB' 'KernelStack: 12672 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10481780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195940 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.400 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
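Before the meminfo scans above and below, the odd_alloc prologue split nr_hugepages=1025 across the two nodes by repeatedly assigning each node its share of the remaining pages, which is why the trace records nodes_test[1]=512 followed by ': 513' for node 0. A short re-derivation of those values, illustrative only and not the setup/hugepages.sh source:

  # Re-derivation of the nodes_test[] values set in the odd_alloc prologue
  # (nodes_test[1]=512, then nodes_test[0]=513); illustrative only.
  nr_hugepages=1025   # from get_test_nr_hugepages 2098176 in the log above
  no_nodes=2
  remaining=$nr_hugepages
  declare -a nodes_test
  while (( no_nodes > 0 )); do
    nodes_test[no_nodes - 1]=$(( remaining / no_nodes ))   # integer division
    (( remaining -= nodes_test[no_nodes - 1] ))
    (( no_nodes-- ))
  done
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512

Dividing the remainder by the count of nodes still unassigned is what turns an odd total into a 513/512 split instead of leaving a leftover page unplaced.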
00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
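The long field-by-field scans throughout this log all come from one parsing pattern, visible in the trace: mapfile the chosen meminfo file, strip any 'Node N ' prefix with an extglob expansion, then 'IFS=': ' read -r var val _' over each line until the requested field matches. A compact standalone sketch of that lookup, using a hypothetical helper name meminfo_get (the real implementation is get_meminfo in setup/common.sh):

  shopt -s extglob   # the original uses extglob to strip the node prefix
  # Hypothetical helper; mirrors the get_meminfo flow visible in the trace.
  meminfo_get() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line
    local line
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
  }
  meminfo_get HugePages_Surp 1   # prints 0 here, matching the trace above

Called with an empty node argument it falls back to /proc/meminfo, which is the 'local node=' case with the failed -e test seen earlier in this trace.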
00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.401 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / compare-and-continue trace repeats for every remaining /proc/meminfo key, Slab through HugePages_Rsvd ...]
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43277096 kB' 'MemAvailable: 46778888 kB' 'Buffers: 3736 kB' 'Cached: 12794356 kB' 'SwapCached: 0 kB' 'Active: 9772628 kB' 'Inactive: 3501484 kB' 'Active(anon): 9378420 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479364 kB' 'Mapped: 193008 kB' 'Shmem: 8902400 kB' 'KReclaimable: 199880 kB' 'Slab: 568736 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 368856 kB' 'KernelStack: 12784 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10474316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:58.402 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical compare-and-continue trace repeats for every key from MemFree through HugePages_Free ...]
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:58.404 nr_hugepages=1025
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:58.404 resv_hugepages=0
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:58.404 surplus_hugepages=0
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:58.404 anon_hugepages=0
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
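The helper being traced here never shells out to grep or awk: common.sh mapfiles the whole meminfo file once, strips any per-node "Node N " prefix with an extglob pattern, then walks the lines with IFS=': ' read -r var val _ until the requested key matches and its value is echoed back. A minimal sketch of that pattern, reconstructed from the xtrace above (the function name and the explicit shopt line are assumptions; the individual commands mirror what the trace shows, slightly simplified):

    shopt -s extglob    # assumed: the +([0-9]) pattern below needs extended globbing

    get_meminfo_sketch() {    # hypothetical name; mirrors the common.sh trace
        local get=$1 node=${2:-}             # key to fetch, optional NUMA node index
        local line var val _ mem_f=/proc/meminfo mem
        # Per-node counters live under /sys; fall back to the global /proc file.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # strip the "Node N " prefix per-node files carry
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # "HugePages_Surp:    0" -> var, val
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    surp=$(get_meminfo_sketch HugePages_Surp)    # "0" on this box, matching surp=0 above

Reading the file once and splitting in-shell is what makes the long compare-and-continue runs appear in the xtrace: every non-matching key costs one [[ ]] test and one continue, nothing else.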
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43276196 kB' 'MemAvailable: 46777988 kB' 'Buffers: 3736 kB' 'Cached: 12794360 kB' 'SwapCached: 0 kB' 'Active: 9772332 kB' 'Inactive: 3501484 kB' 'Active(anon): 9378124 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479004 kB' 'Mapped: 192544 kB' 'Shmem: 8902404 kB' 'KReclaimable: 199880 kB' 'Slab: 568800 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 368920 kB' 'KernelStack: 12928 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10475704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:58.404 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical compare-and-continue trace repeats for every key from MemFree through Unaccepted ...]
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:58.405 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.406 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.406 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.406 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.406 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20059920 kB' 'MemUsed: 12817020 kB' 'SwapCached: 0 kB' 'Active: 7314540 kB' 'Inactive: 3266244 kB' 'Active(anon): 7125968 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3266244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10344788 kB' 'Mapped: 66792 kB' 'AnonPages: 239136 kB' 'Shmem: 6889972 kB' 'KernelStack: 7800 kB' 'PageTables: 5376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125672 kB' 'Slab: 326212 kB' 'SReclaimable: 125672 kB' 'SUnreclaim: 200540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
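Note how the node-qualified call changes the data source: with node=0, common.sh@23/@24 switch mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry the "Node 0 " prefix that the extglob strip at @29 removes. The get_nodes loop traced just before it discovers the node directories with the same extglob idiom; a sketch under stated assumptions (the per-node nr_hugepages sysfs path is a guess here, since the trace only shows the resulting nodes_sys[0]=512 and nodes_sys[1]=513 assignments):

    shopt -s extglob nullglob    # extended globs, and an empty loop if no nodes match

    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} reduces ".../node1" to "1", exactly as in the trace
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"    # 2 on this box: nodes_sys[0]=512, nodes_sys[1]=513

The glob-based discovery is why the trace shows the @29/@30 pair twice and then no_nodes=2: one loop iteration per matching node directory, no numactl or lscpu invocation needed.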
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: setup/common.sh@31-@32 walks the node0 snapshot above field by field (IFS=': '; read -r var val _), issuing 'continue' for every key from MemTotal through HugePages_Free until HugePages_Surp matches]
00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.407 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23214240 kB' 'MemUsed: 4450512 kB' 'SwapCached: 0 kB' 'Active: 2457740 kB' 'Inactive: 235240 kB' 'Active(anon): 2252104 kB' 'Inactive(anon): 0 kB' 'Active(file): 205636 kB' 'Inactive(file): 235240 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2453364 kB' 'Mapped: 125744 kB' 'AnonPages: 239768 kB' 'Shmem: 2012488 kB' 'KernelStack: 5048 kB' 'PageTables: 3068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74208 kB' 'Slab: 242588 kB' 'SReclaimable: 74208 kB' 'SUnreclaim: 168380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[trace condensed: the same field-by-field scan runs over the node1 snapshot until HugePages_Surp matches]
00:04:58.408 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.408 03:08:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:58.408 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
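The two get_meminfo calls above read HugePages_Surp out of the per-node meminfo snapshots. Below is a minimal sketch of that helper, reconstructed from the setup/common.sh@16-@33 lines visible in the trace; the real script may differ in detail, and the loop is written here in for/read form rather than the printf-driven form the xtrace shows.

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

    # Reconstructed from the trace: print the value of field $1 from
    # /proc/meminfo, or from node $2's snapshot when a node is given.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node stats live in sysfs; fall back to the global /proc file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan field by field, skipping until the requested key matches.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 0   # prints 0 on this box, matching 'echo 0' in the trace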
00:04:58.408 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.408 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.408 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.408 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:58.408 node0=512 expecting 513 00:04:58.408 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.408 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.408 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.408 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:58.408 node1=513 expecting 512 00:04:58.667 03:08:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:58.667 00:04:58.667 real 0m1.422s 00:04:58.667 user 0m0.590s 00:04:58.667 sys 0m0.792s 00:04:58.667 03:08:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.667 03:08:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:58.667 ************************************ 00:04:58.667 END TEST odd_alloc 00:04:58.667 ************************************ 00:04:58.667 03:08:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:58.667 03:08:04 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:58.667 03:08:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.667 03:08:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.667 03:08:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:58.667 ************************************ 00:04:58.667 START TEST custom_alloc 00:04:58.667 ************************************ 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.667 03:08:04
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.667 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
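The page-count bookkeeping traced above is plain arithmetic. A quick sanity check of the numbers the trace reports, assuming the sizes are in kB and the default hugepage size is the 2048 kB shown in the meminfo snapshots; the division itself is inferred, since the trace only shows inputs and results:

    #!/usr/bin/env bash
    # Inferred arithmetic: requested size (kB) divided by the 2048 kB default
    # hugepage size reproduces every nr_hugepages value the trace reports.
    default_hugepages=2048   # kB, matching 'Hugepagesize: 2048 kB' below

    pages() { echo $(( $1 / default_hugepages )); }

    pages 1048576            # -> 512  : get_test_nr_hugepages 1048576
    pages 2097152            # -> 1024 : get_test_nr_hugepages 2097152
    echo $(( 512 + 1024 ))   # -> 1536 : the HugePages_Total verified below

Note also the two branches visible in the trace: with no per-node map yet, the first call splits its 512 pages evenly (nodes_test[0]=nodes_test[1]=256), while the second call runs after nodes_hp[0]=512 exists and mirrors nodes_hp into nodes_test instead.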
00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.668 03:08:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.605 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.605 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:59.605 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.605 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.605 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.605 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.605 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.605 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.605 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.605 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.605 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.605 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:04:59.605 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.605 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.605 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.605 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.605 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42238016 kB' 'MemAvailable: 45739808 kB' 'Buffers: 3736 kB' 'Cached: 12794468 kB' 'SwapCached: 0 kB' 'Active: 9771988 kB' 'Inactive: 3501484 kB' 'Active(anon): 9377780 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478028 kB' 'Mapped: 192520 kB' 'Shmem: 8902512 kB' 'KReclaimable: 199880 kB' 'Slab: 569048 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369168 kB' 'KernelStack: 12640 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10473532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.871 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.872 03:08:05 
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42238436 kB' 'MemAvailable: 45740228 kB' 'Buffers: 3736 kB' 'Cached: 12794468 kB' 'SwapCached: 0 kB' 'Active: 9772052 kB' 'Inactive: 3501484 kB' 'Active(anon): 9377844 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478488 kB' 'Mapped: 192520 kB' 'Shmem: 8902512 kB' 'KReclaimable: 199880 kB' 'Slab: 569024 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369144 kB' 'KernelStack: 12688 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10473548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:04:59.872 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # (scan: every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and hits continue)
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
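The snapshot is internally consistent: on a host using a single hugepage size, Hugetlb should equal HugePages_Total x Hugepagesize, and 1536 x 2048 kB = 3145728 kB matches the printed value. A stand-alone one-liner to spot-check that on any host (illustrative, not part of the test suite):

    # Spot-check: HugePages_Total * Hugepagesize should equal Hugetlb
    # (assuming only one hugepage size is in use on the host).
    awk '/^HugePages_Total:/ { total = $2 }
         /^Hugepagesize:/    { size = $2 }
         /^Hugetlb:/         { hugetlb = $2 }
         END { printf "%d pages * %d kB = %d kB (Hugetlb: %d kB)\n",
               total, size, total * size, hugetlb }' /proc/meminfo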
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.873 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.874 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42237788 kB' 'MemAvailable: 45739580 kB' 'Buffers: 3736 kB' 'Cached: 12794480 kB' 'SwapCached: 0 kB' 'Active: 9771908 kB' 'Inactive: 3501484 kB' 'Active(anon): 9377700 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478344 kB' 'Mapped: 192520 kB' 'Shmem: 8902524 kB' 'KReclaimable: 199880 kB' 'Slab: 569092 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369212 kB' 'KernelStack: 12704 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10473572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:04:59.874 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # (scan: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits continue)
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:59.875 nr_hugepages=1536
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:59.875 resv_hugepages=0
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:59.875 surplus_hugepages=0
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:59.875 anon_hugepages=0
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
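What hugepages.sh is doing here, condensed: it has already set nr_hugepages, pulls the anonymous, surplus, and reserved counts via get_meminfo, prints the four values, and asserts that the requested pool size (1536) equals both nr_hugepages + surp + resv and nr_hugepages alone. A hypothetical wrapper showing the same bookkeeping (the function name and argument handling are illustrative; only the two (( ... )) assertions are taken from the trace):

    # Hypothetical condensation of the checks at setup/hugepages.sh@97-109.
    check_hugepage_accounting() {
        local expected=$1 nr_hugepages=$2    # both 1536 in this run
        local anon surp resv
        anon=$(get_meminfo AnonHugePages)    # 0 here
        surp=$(get_meminfo HugePages_Surp)   # 0 here
        resv=$(get_meminfo HugePages_Rsvd)   # 0 here
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # Both must hold: the pool net of surplus/reserved pages, and the
        # raw pool size, must equal the requested allocation.
        ((expected == nr_hugepages + surp + resv)) || return 1
        ((expected == nr_hugepages)) || return 1
    }

With surp=resv=0 both assertions reduce to 1536 == 1536, so the custom_alloc pool passes and the test goes on to confirm HugePages_Total from /proc/meminfo.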
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.875 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.876 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42237788 kB' 'MemAvailable: 45739580 kB' 'Buffers: 3736 kB' 'Cached: 12794508 kB' 'SwapCached: 0 kB' 'Active: 9771948 kB' 'Inactive: 3501484 kB' 'Active(anon): 9377740 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478404 kB' 'Mapped: 192520 kB' 'Shmem: 8902552 kB' 'KReclaimable: 199880 kB' 'Slab: 569092 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369212 kB' 'KernelStack: 12704 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10473228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:04:59.876 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # (scan: keys MemTotal through ShmemHugePages each fail the HugePages_Total match and hit continue)
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:59.877 
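Note: the scan traced above is the generic key lookup in setup/common.sh's get_meminfo: split each meminfo line on ': ', continue past every key that is not the requested one, then echo the value. A minimal sketch of that pattern, assuming nothing beyond standard bash (get_meminfo_value is an illustrative name, not the repository's helper):

    get_meminfo_value() {
        local key=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            # One 'continue' per non-matching key; this is exactly what
            # produces the long xtrace run condensed above.
            [[ $var == "$key" ]] || continue
            echo "$val"            # trailing units ("kB"), if any, land in $_
            return 0
        done < "$file"
        return 1
    }

    get_meminfo_value HugePages_Total   # the traced lookup returned 1536 here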
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.877 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20076764 kB' 'MemUsed: 12800176 kB' 'SwapCached: 0 kB' 'Active: 7314128 kB' 'Inactive: 3266244 kB' 'Active(anon): 7125556 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3266244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10344864 kB' 'Mapped: 66808 kB' 'AnonPages: 238656 kB' 'Shmem: 6890048 kB' 'KernelStack: 7608 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125672 kB' 'Slab: 326368 kB' 'SReclaimable: 125672 kB' 'SUnreclaim: 200696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace per-key scan elided: one continue per node0 key until HugePages_Surp matches]
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
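Note: the node-scoped lookup differs from the system-wide one only in its input file: /sys/devices/system/node/node<N>/meminfo prefixes every line with "Node <N> ", which the traced mem=("${mem[@]#Node +([0-9]) }") strips before the same key scan runs. A self-contained sketch of that path, assuming standard bash (get_node_meminfo is an illustrative name, not the repository's helper):

    get_node_meminfo() {
        local key=$1 node=$2 mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        shopt -s extglob                  # for the +([0-9]) pattern below
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " per-node prefix
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_node_meminfo HugePages_Surp 0   # the traced call returned 0 here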
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.878 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.879 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22161036 kB' 'MemUsed: 5503716 kB' 'SwapCached: 0 kB' 'Active: 2457416 kB' 'Inactive: 235240 kB' 'Active(anon): 2251780 kB' 'Inactive(anon): 0 kB' 'Active(file): 205636 kB' 'Inactive(file): 235240 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2453404 kB' 'Mapped: 125712 kB' 'AnonPages: 239276 kB' 'Shmem: 2012528 kB' 'KernelStack: 5000 kB' 'PageTables: 2904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74208 kB' 'Slab: 242724 kB' 'SReclaimable: 74208 kB' 'SUnreclaim: 168516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace per-key scan elided: one continue per node1 key until HugePages_Surp matches]
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
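Note: at this point the test holds three figures per node: the split it configured (nodes_test), what sysfs reports (nodes_sys), and the surplus just read (0 on both nodes). The pass condition is plain arithmetic; a condensed restatement with this run's values, assuming nothing beyond the figures logged above:

    pool_total=1536        # HugePages_Total from /proc/meminfo
    node_pool=(512 1024)   # per-node HugePages_Total (nodes 0 and 1)
    node_surp=(0 0)        # per-node HugePages_Surp

    sum=0
    for n in "${!node_pool[@]}"; do
        (( sum += node_pool[n] + node_surp[n] ))
    done
    (( sum == pool_total )) &&
        echo "node0=${node_pool[0]} node1=${node_pool[1]} accounts for the $pool_total-page pool"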
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:59.880 node0=512 expecting 512
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:59.880 node1=1024 expecting 1024
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:59.880
00:04:59.880 real 0m1.384s
00:04:59.880 user 0m0.562s
00:04:59.880 sys 0m0.779s
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:59.880 03:08:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:59.880 ************************************
00:04:59.880 END TEST custom_alloc
00:04:59.880 ************************************
00:04:59.880 03:08:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:59.880 03:08:05 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:59.880 03:08:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:59.880 03:08:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:59.880 03:08:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:59.880 ************************************
00:04:59.880 START TEST no_shrink_alloc
00:04:59.880 ************************************
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
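Note: nr_hugepages=1024 falls out of the requested size divided by the hugepage size. A one-liner restating the arithmetic, assuming the 2097152 argument is in kB (consistent with Hugepagesize's kB units and with the 'Hugetlb: 2097152 kB' line further down):

    size_kb=2097152                                           # requested pool
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this host
    echo "$(( size_kb / hp_kb )) hugepages"                   # -> 1024, all pinned to node 0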
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.880 03:08:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:01.265 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:01.265 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:01.265 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:01.265 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:01.265 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:01.265 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:01.265 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:01.265 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:01.265 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:01.265 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:01.265 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:01.265 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:01.265 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:01.265 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:01.265 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:01.265 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:01.265 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
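Note: the [[ always [madvise] never != *\[never\]* ]] test just above is a transparent-hugepage guard: the kernel brackets the active policy in that sysfs file, and the trace suggests verify_nr_hugepages only samples AnonHugePages as a baseline when THP is not hard-disabled. A sketch of the same check, assuming the standard sysfs location:

    # The active THP policy is the bracketed word, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *\[never\]* ]]; then
        # THP can inflate AnonHugePages, so record it before comparing pools.
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "THP policy: $thp; AnonHugePages baseline: ${anon_kb} kB"
    fi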
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.265 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43197480 kB' 'MemAvailable: 46699272 kB' 'Buffers: 3736 kB' 'Cached: 12794596 kB' 'SwapCached: 0 kB' 'Active: 9772696 kB' 'Inactive: 3501484 kB' 'Active(anon): 9378488 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479040 kB' 'Mapped: 192604 kB' 'Shmem: 8902640 kB' 'KReclaimable: 199880 kB' 'Slab: 568980 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369100 kB' 'KernelStack: 12672 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10473996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
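Note: the snapshot above is internally consistent and worth a quick cross-check: HugePages_Total times Hugepagesize should equal the Hugetlb line. A one-liner over the same fields, assuming the usual "Key: value kB" meminfo layout:

    awk '/^HugePages_Total:/ {t=$2}
         /^Hugepagesize:/    {s=$2}
         END {printf "pool = %d pages x %d kB = %d kB\n", t, s, t*s}' /proc/meminfo
    # on this host: 1024 x 2048 kB = 2097152 kB, matching 'Hugetlb: 2097152 kB'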
[xtrace per-key scan elided: the IFS=': ' read loop steps through the snapshot's keys (MemTotal, MemFree, MemAvailable, Buffers, Cached, ...) with one continue per key that is not AnonHugePages; the excerpt ends mid-scan]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43198196 kB' 'MemAvailable: 46699988 kB' 'Buffers: 3736 kB' 'Cached: 12794604 kB' 'SwapCached: 0 kB' 'Active: 9772208 kB' 'Inactive: 3501484 kB' 'Active(anon): 9378000 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478556 kB' 'Mapped: 192536 kB' 'Shmem: 8902648 kB' 'KReclaimable: 199880 kB' 'Slab: 568964 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369084 kB' 'KernelStack: 12688 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 
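The loop traced above is the get_meminfo helper from the test's setup/common.sh scanning the captured /proc/meminfo snapshot: it reads each "Key: value" pair with IFS=': ', continues past every key that is not the one requested (AnonHugePages here), then echoes the matching value and returns. A minimal stand-alone sketch of that parsing pattern, with an illustrative function name rather than the exact upstream helper:

    # Sketch only: echo the value of one /proc/meminfo key.
    # get_meminfo_value is an illustrative name, not the SPDK helper itself.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # skip non-matching keys
            echo "$val"
            return 0
        done </proc/meminfo
        return 1
    }

    # e.g. anon=$(get_meminfo_value AnonHugePages)  # -> 0 on this node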
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43198196 kB' 'MemAvailable: 46699988 kB' 'Buffers: 3736 kB' 'Cached: 12794604 kB' 'SwapCached: 0 kB' 'Active: 9772208 kB' 'Inactive: 3501484 kB' 'Active(anon): 9378000 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478556 kB' 'Mapped: 192536 kB' 'Shmem: 8902648 kB' 'KReclaimable: 199880 kB' 'Slab: 568964 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369084 kB' 'KernelStack: 12688 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10474012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.266 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical @31/@32 skip trace repeated for every non-matching key through HugePages_Rsvd ...]
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
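Each get_meminfo call also re-probes its data source: the [[ -e /sys/devices/system/node/node/meminfo ]] and [[ -n '' ]] checks are the helper testing for a per-NUMA-node meminfo file before falling back to the global /proc/meminfo (no node was passed, so the probed path degenerates to the non-existent node/node/meminfo). The "Node +([0-9])" prefix strip at common.sh@29 exists for that per-node format, whose lines read "Node 0 MemTotal: ...". A hedged sketch of the source selection, assuming an optional node-number argument:

    # Sketch only: choose a meminfo source, optionally per NUMA node.
    pick_meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        # Prefer the node-local view when a node was given and exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

    # e.g. pick_meminfo_file    # -> /proc/meminfo
    #      pick_meminfo_file 0  # -> node0's meminfo, if present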
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43198468 kB' 'MemAvailable: 46700260 kB' 'Buffers: 3736 kB' 'Cached: 12794624 kB' 'SwapCached: 0 kB' 'Active: 9771984 kB' 'Inactive: 3501484 kB' 'Active(anon): 9377776 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478320 kB' 'Mapped: 192536 kB' 'Shmem: 8902668 kB' 'KReclaimable: 199880 kB' 'Slab: 569028 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369148 kB' 'KernelStack: 12688 kB' 'PageTables: 7596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10474036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:01.268 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical @31/@32 skip trace repeated for every non-matching key through HugePages_Free ...]
00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:01.270 nr_hugepages=1024
00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:01.270 resv_hugepages=0
00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:01.270 surplus_hugepages=0
00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:01.270 anon_hugepages=0
00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB' 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.270 03:08:07 
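The printf above is get_meminfo dumping the snapshot that the @31/@32 loop then walks key by key. A minimal sketch of that lookup, with get_meminfo_sketch as a hypothetical stand-in for the real helper traced from setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob   # the "Node +([0-9]) " prefix strip below needs extended globs
    # Hypothetical stand-in for setup/common.sh's get_meminfo: look one key up in
    # /proc/meminfo, or in a node's sysfs meminfo copy when a node number is given.
    get_meminfo_sketch() {
      local get=$1 node=${2:-} line var val _ mem
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"   # any trailing "kB" lands in $_
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
    }
    # get_meminfo_sketch HugePages_Total   -> 1024 on this host
    # get_meminfo_sketch HugePages_Surp 0  -> 0 (read from node0's sysfs copy)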
00:05:01.270 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # scan of the snapshot for HugePages_Total: skipped every key from MemTotal through Unaccepted; matched HugePages_Total
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
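The @107/@110 checks above are the accounting identity the test enforces: the page count it expects must equal nr_hugepages plus surplus plus reserved, both per sysctl and per /proc/meminfo. An illustrative restatement, reusing the hypothetical get_meminfo_sketch from earlier (not the verbatim hugepages.sh):

    # Illustrative version of the hugepages.sh@107/@110 identity; the expected
    # count (1024 in this run) is an assumption of the sketch.
    requested=1024
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    (( requested == nr_hugepages + surp + resv )) &&
      (( $(get_meminfo_sketch HugePages_Total) == nr_hugepages + surp + resv )) &&
      echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"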
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # get=HugePages_Surp node=0 mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': ' read -r var val _
00:05:01.272 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19038800 kB' 'MemUsed: 13838140 kB' 'SwapCached: 0 kB' 'Active: 7313792 kB' 'Inactive: 3266244 kB' 'Active(anon): 7125220 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3266244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10344920 kB' 'Mapped: 66824 kB' 'AnonPages: 238220 kB' 'Shmem: 6890104 kB' 'KernelStack: 7624 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125672 kB' 'Slab: 326360 kB' 'SReclaimable: 125672 kB' 'SUnreclaim: 200688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:01.273 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # scan of node0 meminfo for HugePages_Surp: skipped every key from MemTotal through HugePages_Free; matched HugePages_Surp
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:01.274 node0=1024 expecting 1024
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:01.274 03:08:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:02.659 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:02.659 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:02.659 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:02.659 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:02.659 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:02.659 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:02.659 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:02.659 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:02.659 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:02.659 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:02.659 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:02.659 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:02.659 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:02.659 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:02.659 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:02.659 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:02.659 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
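The get_nodes walk condensed above enumerates the NUMA nodes under /sys/devices/system/node and records each node's hugepage count before setup.sh is re-run with NRHUGE=512. A rough reconstruction, reusing get_meminfo_sketch (hypothetical; the real script also fills a nodes_test array it later compares against):

    shopt -s extglob
    declare -A nodes_sys
    # Rough reconstruction of get_nodes: one entry per NUMA node, keyed by node
    # number, holding that node's currently allocated hugepages.
    for node in /sys/devices/system/node/node+([0-9]); do
      n=${node##*node}                                    # "node0" -> "0"
      nodes_sys[$n]=$(get_meminfo_sketch HugePages_Total "$n")
    done
    echo "no_nodes=${#nodes_sys[@]}"                      # 2 on this rig
    # The harness prints "node0=1024 expecting 1024" when test and sysfs agree.
    for n in "${!nodes_sys[@]}"; do echo "node$n=${nodes_sys[$n]}"; done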
00:05:02.659 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:02.659 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:02.659 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:05:02.660 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:02.660 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:02.660 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # get=AnonHugePages node= mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': ' read -r var val _
00:05:02.660 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43232084 kB' 'MemAvailable: 46733876 kB' 'Buffers: 3736 kB' 'Cached: 12794712 kB' 'SwapCached: 0 kB' 'Active: 9772252 kB' 'Inactive: 3501484 kB' 'Active(anon): 9378044 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478436 kB' 'Mapped: 192676 kB' 'Shmem: 8902756 kB' 'KReclaimable: 199880 kB' 'Slab: 568976 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369096 kB' 'KernelStack: 12688 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10474232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
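The hugepages.sh@96 test above gates the anon-hugepage sample on the transparent-hugepage mode: the bracketed token in the sysfs file marks the active setting, and AnonHugePages is only read when THP is not pinned to "never". A sketch of that gate (illustrative; the sysfs path is the standard kernel one, get_meminfo_sketch is the stand-in from earlier):

    # Sketch of the THP gate at hugepages.sh@96. On this host the file reads
    # "always [madvise] never", so the gate passes and AnonHugePages is sampled.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
      anon_hugepages=$(get_meminfo_sketch AnonHugePages)   # 0 on this host
    else
      anon_hugepages=0                                     # THP off: nothing to count
    fi
    echo "anon_hugepages=$anon_hugepages"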
00:05:02.660 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # scan of the snapshot for AnonHugePages: skipped every key from MemTotal through CommitLimit
val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43232500 kB' 'MemAvailable: 46734292 kB' 'Buffers: 3736 kB' 'Cached: 12794716 kB' 'SwapCached: 0 kB' 'Active: 9772588 kB' 'Inactive: 3501484 kB' 'Active(anon): 9378380 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478812 kB' 'Mapped: 192620 kB' 'Shmem: 8902760 kB' 'KReclaimable: 199880 kB' 'Slab: 568960 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369080 kB' 'KernelStack: 12720 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10474252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.661 03:08:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.661 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
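The xtrace above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches the one requested (here HugePages_Surp); each non-matching key appears as one '[[ key == pattern ]]' test followed by 'continue'. A minimal bash sketch of that loop, reconstructed from the trace - the names get, node, mem_f, var and val appear in the trace itself, but the loop form and the node-file path are assumptions, not the verbatim SPDK source:

    # Reconstruction of the lookup traced above; not verbatim setup/common.sh.
    get_meminfo() {
        local get=$1 node=${2:-}    # e.g. get_meminfo HugePages_Surp
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        shopt -s extglob
        # With a node argument, read that node's own meminfo (path assumed).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix lines with "Node <n> "; strip it, as the
        # traced mem=("${mem[@]#Node +([0-9]) }") expansion does.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"  # "Key: value kB" -> var, val
            [[ $var == "$get" ]] || continue        # the skips traced above
            echo "$val"                             # bare value, e.g. 0
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp it prints 0 for the snapshot above, which is the value hugepages.sh then stores into surp.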
00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
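For readers decoding these prefixes: everything after the leading harness timestamp is bash xtrace output with a customized PS4. In prompt expansion \t prints the wall clock (the 03:08:08 fields) and \$ prints '#', presumably because the job runs as root, which is why every traced command is preceded by '# '. A rough approximation - the test-name variable and any path trimming are guesses, not SPDK's exact definition:

    # Produces lines like: " 03:08:08 <tests> -- setup/common.sh@32 -- # continue"
    PS4=' \t ${TEST_TAG:-} -- ${BASH_SOURCE}@${LINENO} -- \$ '
    set -x   # enable the trace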
00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.662 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.663 03:08:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43233628 kB' 'MemAvailable: 46735420 kB' 'Buffers: 3736 kB' 'Cached: 12794716 kB' 'SwapCached: 0 kB' 'Active: 9772168 kB' 'Inactive: 3501484 kB' 'Active(anon): 9377960 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478360 kB' 'Mapped: 192544 kB' 'Shmem: 8902760 kB' 'KReclaimable: 199880 kB' 'Slab: 568960 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369080 kB' 'KernelStack: 12720 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10474272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.663 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.664 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:02.665 nr_hugepages=1024 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.665 resv_hugepages=0 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.665 surplus_hugepages=0 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.665 anon_hugepages=0 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
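At hugepages.sh@97-@109 the traced lookups feed the no_shrink_alloc invariant: with anonymous THP, surplus and reserved pages all zero, the configured 1024 pages must account for every hugepage. A sketch of that check, reusing the get_meminfo reconstruction above (the source of the pre-expanded 1024 on the left of the traced '(( 1024 == ... ))' is not visible here, so it is supplied as a plain variable):

    nr_hugepages=1024                     # the configured target
    allocated=1024                        # already expanded to 1024 in the trace
    anon=$(get_meminfo AnonHugePages)     # kB of transparent hugepages -> 0
    surp=$(get_meminfo HugePages_Surp)    # surplus pages -> 0
    resv=$(get_meminfo HugePages_Rsvd)    # reserved pages -> 0
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    (( allocated == nr_hugepages + surp + resv ))  # 1024 == 1024 + 0 + 0
    (( allocated == nr_hugepages ))                # holds since surp == resv == 0

The meminfo snapshots printed above are internally consistent with this: HugePages_Total x Hugepagesize = 1024 x 2048 kB = 2097152 kB, exactly the reported Hugetlb value.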
00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43233968 kB' 'MemAvailable: 46735760 kB' 'Buffers: 3736 kB' 'Cached: 12794756 kB' 'SwapCached: 0 kB' 'Active: 9772496 kB' 'Inactive: 3501484 kB' 'Active(anon): 9378288 kB' 'Inactive(anon): 0 kB' 'Active(file): 394208 kB' 'Inactive(file): 3501484 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478656 kB' 'Mapped: 192544 kB' 'Shmem: 8902800 kB' 'KReclaimable: 199880 kB' 'Slab: 568960 kB' 'SReclaimable: 199880 kB' 'SUnreclaim: 369080 kB' 'KernelStack: 12720 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10474296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1740380 kB' 'DirectMap2M: 13907968 kB' 'DirectMap1G: 53477376 kB'
00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:02.665 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the setup/common.sh@31 read / @32 compare / @32 continue cycle repeats for every remaining field of the snapshot above, MemFree through Unaccepted, none of which matches HugePages_Total ...]
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
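For reference, the scan traced above reduces to the following pattern; this is a minimal sketch of the technique, assuming bash with extglob enabled, not the verbatim setup/common.sh source:

    shopt -s extglob
    # Pick the per-node meminfo file when a node is given, strip the
    # "Node N " prefix those files carry, then split "Key: value" on ': '.
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Total    # prints 1024 on this box, per the trace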
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19048788 kB' 'MemUsed: 13828152 kB' 'SwapCached: 0 kB' 'Active: 7314144 kB' 'Inactive: 3266244 kB' 'Active(anon): 7125572 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3266244 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10344928 kB' 'Mapped: 66828 kB' 'AnonPages: 238564 kB' 'Shmem: 6890112 kB' 'KernelStack: 7672 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 125672 kB' 'Slab: 326300 kB' 'SReclaimable: 125672 kB' 'SUnreclaim: 200628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.667 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the same @31/@32 read-compare-continue cycle walks every field of the node0 snapshot above, MemFree through HugePages_Free, none of which matches HugePages_Surp ...]
00:05:02.668 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.668 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:02.668 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:02.668 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:02.668 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:02.668 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:02.668 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:02.668 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:02.668 node0=1024 expecting 1024
00:05:02.668 03:08:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:02.668 
00:05:02.668 real 0m2.686s
00:05:02.668 user 0m1.083s
00:05:02.668 sys 0m1.512s
03:08:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
03:08:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:02.668 ************************************
00:05:02.668 END TEST no_shrink_alloc
00:05:02.668 ************************************
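The per-node bookkeeping that get_nodes performs can be reproduced directly from sysfs; a hedged, condensed equivalent (2 MiB pools only, paths as seen in the trace):

    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        # nr_hugepages per node is what nodes_sys[] records above
        echo "node$n=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")"
    done
    # Expected here: node0=1024 and node1=0, matching "node0=1024 expecting 1024".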
03:08:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
03:08:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:02.668 
00:05:02.668 real 0m11.159s
00:05:02.668 user 0m4.293s
00:05:02.668 sys 0m5.746s
03:08:08 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
03:08:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:02.668 ************************************
00:05:02.668 END TEST hugepages
00:05:02.668 ************************************
03:08:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0
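The clear_hp teardown just traced amounts to zeroing every hugepage pool under every node so the next suite starts from a clean slate; a minimal sketch, assuming root privileges:

    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"    # release the pool back to the kernel
        done
    done
    export CLEAR_HUGE=yes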
03:08:08 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
03:08:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
03:08:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
03:08:08 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:02.669 ************************************
00:05:02.669 START TEST driver
00:05:02.669 ************************************
03:08:08 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:02.926 * Looking for test storage...
00:05:02.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
03:08:08 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
03:08:08 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
03:08:08 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:05.465 03:08:11 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
03:08:11 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
03:08:11 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
03:08:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:05.465 ************************************
00:05:05.465 START TEST guess_driver
00:05:05.465 ************************************
03:08:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:05:05.465 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:05.465 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:05.465 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:05.465 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:05.465 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:05:05.465 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:05:05.465 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
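pick_driver, traced above, keys off two signals: populated IOMMU groups (141 on this host) and a vfio_pci module whose dependency chain resolves. A condensed sketch of that decision; the uio_pci_generic fallback branch is an assumption for illustration, it is not exercised in this trace:

    shopt -s nullglob
    # A module counts as available if modprobe can resolve it to real .ko files.
    is_driver() {
        modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'
    }
    iommu_groups=(/sys/kernel/iommu_groups/*)
    unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null || echo N)
    if { (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; } && is_driver vfio_pci; then
        echo vfio-pci
    elif is_driver uio_pci_generic; then   # hypothetical fallback branch
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi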
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
Looking for driver=vfio-pci
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
03:08:11 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
03:08:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
03:08:11 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:06.406 03:08:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
03:08:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
03:08:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the @58 marker check / @61 driver check / @57 read cycle repeats for each remaining device line that setup.sh config prints, every one already bound to vfio-pci ...]
00:05:07.344 03:08:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
03:08:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
03:08:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:07.602 03:08:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
03:08:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
03:08:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
03:08:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:10.137 
00:05:10.137 real 0m4.587s
00:05:10.137 user 0m1.041s
00:05:10.137 sys 0m1.653s
03:08:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
03:08:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:05:10.137 ************************************
00:05:10.137 END TEST guess_driver
00:05:10.137 ************************************
03:08:15 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:05:10.137 
00:05:10.137 real 0m7.168s
00:05:10.137 user 0m1.635s
00:05:10.137 sys 0m2.653s
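An alternative way to double-check the bindings the marker loop just verified is to read them straight from sysfs; this helper is illustrative only and not part of the traced scripts:

    for dev in /sys/bus/pci/devices/*; do
        [[ -e $dev/driver ]] || continue    # skip devices with no driver bound
        printf '%s -> %s\n' "${dev##*/}" "$(basename "$(readlink -f "$dev/driver")")"
    done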
03:08:15 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
03:08:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:10.137 ************************************
00:05:10.137 END TEST driver
00:05:10.137 ************************************
03:08:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0
03:08:15 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
03:08:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
03:08:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
03:08:15 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:10.137 ************************************
00:05:10.137 START TEST devices
00:05:10.137 ************************************
03:08:15 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:05:10.137 * Looking for test storage...
00:05:10.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
03:08:16 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
03:08:16 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
03:08:16 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
03:08:16 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:11.546 03:08:17 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
03:08:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
03:08:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
03:08:17 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
03:08:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
03:08:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
03:08:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
03:08:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
03:08:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
03:08:17 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
03:08:17 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
03:08:17 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
03:08:17 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
03:08:17 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
03:08:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
03:08:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
03:08:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
03:08:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0
03:08:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
03:08:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
03:08:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
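The get_zoned_devs scan above can be condensed to the following sketch; the queue/zoned attribute reads "none" for ordinary namespaces, and anything else marks the device as zoned and excluded from the mount tests:

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # zoned namespaces report e.g. "host-managed" instead of "none"
        [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]] && zoned_devs[$dev]=1
    done
    if (( ${#zoned_devs[@]} )); then echo "zoned: ${!zoned_devs[*]}"; else echo "no zoned devices"; fi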
00:05:11.546 03:08:17 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
03:08:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
03:08:17 setup.sh.devices -- scripts/common.sh@391 -- # pt=
03:08:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1
03:08:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
03:08:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
03:08:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
03:08:17 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
03:08:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
03:08:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
03:08:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0
03:08:17 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
03:08:17 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
03:08:17 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
03:08:17 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
03:08:17 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
03:08:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:11.546 ************************************
00:05:11.546 START TEST nvme_mount
00:05:11.546 ************************************
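The free-disk screen that just admitted nvme0n1 combines three checks: no SPDK GPT data (spdk-gpt.py bails), no partition table (blkid prints nothing), and a 3 GiB size floor. A rough sketch; the byte size is computed here from the sector count, whereas the traced sec_size_to_bytes helper reported 1000204886016 directly:

    block=nvme0n1
    pt=$(blkid -s PTTYPE -o value "/dev/$block")
    size=$(( $(< "/sys/block/$block/size") * 512 ))   # 512-byte sectors -> bytes
    min_disk_size=$(( 3 * 1024 * 1024 * 1024 ))       # 3221225472, as in the trace
    if [[ -z $pt ]] && (( size >= min_disk_size )); then
        echo "/dev/$block is free and large enough ($size bytes)"
    fi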
# (( part <= part_no )) 00:05:11.546 03:08:17 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:11.546 03:08:17 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:11.546 03:08:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:12.486 Creating new GPT entries in memory. 00:05:12.486 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.486 other utilities. 00:05:12.486 03:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.486 03:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.486 03:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.486 03:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.486 03:08:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:13.864 Creating new GPT entries in memory. 00:05:13.864 The operation has completed successfully. 00:05:13.864 03:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:13.864 03:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3049604 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.865 03:08:19 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.865 03:08:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.802 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.061 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.061 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:15.061 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.061 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.061 03:08:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.319 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:15.319 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:15.319 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:15.319 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.319 03:08:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:16.258 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.517 03:08:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
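
[Editor's note] For readers following the xtrace, the nvme_mount test being exercised here reduces to a handful of standard block-device commands. A minimal standalone sketch, assuming a disposable /dev/nvme0n1 and a throwaway mount point rather than the harness's workspace paths (the harness additionally syncs with udev via sync_dev_uevents.sh, omitted here):

    #!/usr/bin/env bash
    # Sketch of the nvme_mount cycle traced here: partition, format, mount,
    # verify a dummy file, then clean up. Device and mount point are assumptions.
    set -euo pipefail
    disk=/dev/nvme0n1                        # assumption: a scratch test disk
    mnt=$(mktemp -d)                         # assumption: throwaway mount point
    sgdisk "$disk" --zap-all                 # destroy existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199      # one 1 GiB partition (2097152 x 512 B sectors)
    mkfs.ext4 -qF "${disk}p1"                # quiet, forced ext4 format
    mount "${disk}p1" "$mnt"
    touch "$mnt/test_nvme"                   # the dummy file the verify step checks
    [[ -e $mnt/test_nvme ]] && rm "$mnt/test_nvme"
    umount "$mnt"
    wipefs --all "${disk}p1"                 # erase the ext4 signature
    wipefs --all "$disk"                     # erase GPT/PMBR signatures

The second pass of the test repeats the same cycle against the unpartitioned disk (the mkfs.ext4 -qF /dev/nvme0n1 1024M seen above), and the dm_mount test further below runs it once more with two 1 GiB partitions joined into a single device-mapper target. The trace shows dmsetup create nvme_dm_test but never prints the table it is fed; a plausible linear concatenation of the two partitions (hypothetical, not taken from the trace) would be:

    # dm table lines are 'start length linear device offset', in 512 B sectors
    printf '%s\n' \
        '0 2097152 linear /dev/nvme0n1p1 0' \
        '2097152 2097152 linear /dev/nvme0n1p2 0' | dmsetup create nvme_dm_test
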
00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.893 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.893 00:05:17.893 real 0m6.249s 00:05:17.893 user 0m1.461s 00:05:17.893 sys 0m2.363s 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.893 03:08:23 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.893 ************************************ 00:05:17.893 END TEST nvme_mount 00:05:17.893 ************************************ 00:05:17.893 03:08:23 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:17.893 03:08:23 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:17.893 03:08:23 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.893 03:08:23 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.893 03:08:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:17.893 ************************************ 00:05:17.893 START TEST dm_mount 00:05:17.893 ************************************ 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.893 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.894 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.894 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:17.894 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:17.894 03:08:23 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:18.830 Creating new GPT entries in memory. 00:05:18.830 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:18.830 other utilities. 00:05:18.830 03:08:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.830 03:08:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.830 03:08:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:18.830 03:08:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.830 03:08:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:19.766 Creating new GPT entries in memory. 00:05:19.766 The operation has completed successfully. 00:05:19.766 03:08:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:19.766 03:08:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.766 03:08:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.766 03:08:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.766 03:08:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:21.144 The operation has completed successfully. 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3051975 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.145 03:08:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.082 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.083 03:08:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:22.083 03:08:28 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.083 03:08:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:23.455 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:23.455 00:05:23.455 real 0m5.636s 00:05:23.455 user 0m0.932s 00:05:23.455 sys 0m1.561s 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.455 03:08:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:23.455 ************************************ 00:05:23.455 END TEST dm_mount 00:05:23.455 ************************************ 00:05:23.455 03:08:29 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:23.455 03:08:29 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:23.455 03:08:29 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:23.455 03:08:29 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.455 03:08:29 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.455 03:08:29 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:23.455 03:08:29 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.455 03:08:29 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.715 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:23.715 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:23.715 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.715 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.715 03:08:29 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:23.715 03:08:29 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:23.715 03:08:29 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.715 03:08:29 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.715 03:08:29 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.715 03:08:29 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.715 03:08:29 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:23.715 00:05:23.715 real 0m13.783s 00:05:23.715 user 0m3.033s 00:05:23.715 sys 0m4.944s 00:05:23.715 03:08:29 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.715 03:08:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:23.715 ************************************ 00:05:23.715 END TEST devices 00:05:23.715 ************************************ 00:05:23.715 03:08:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:23.715 00:05:23.715 real 0m42.809s 00:05:23.715 user 0m12.298s 00:05:23.715 sys 0m18.694s 00:05:23.715 03:08:29 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.715 03:08:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:23.715 ************************************ 00:05:23.715 END TEST setup.sh 00:05:23.715 ************************************ 00:05:23.715 03:08:29 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.715 03:08:29 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:25.105 Hugepages
00:05:25.105 node hugesize free / total
00:05:25.105 node0 1048576kB 0 / 0
00:05:25.105 node0 2048kB 2048 / 2048
00:05:25.105 node1 1048576kB 0 / 0
00:05:25.105 node1 2048kB 0 / 0
00:05:25.105
00:05:25.105 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:25.105 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:05:25.105 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:05:25.105 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:05:25.105 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:05:25.105 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:05:25.105 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:05:25.105 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:05:25.105 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:05:25.105 I/OAT
0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:25.105 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:25.105 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:25.105 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:25.105 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:25.105 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:25.105 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:25.105 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:25.105 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:25.105 03:08:31 -- spdk/autotest.sh@130 -- # uname -s 00:05:25.105 03:08:31 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:25.105 03:08:31 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:25.105 03:08:31 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:26.041 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:26.301 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:26.301 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:26.301 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:26.301 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:26.301 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:26.301 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:26.301 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:26.301 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:26.301 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:26.302 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:26.302 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:26.302 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:26.302 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:26.302 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:26.302 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:27.237 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:27.237 03:08:33 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:28.612 03:08:34 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:28.612 03:08:34 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:28.612 03:08:34 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:28.612 03:08:34 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:28.612 03:08:34 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:28.612 03:08:34 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:28.612 03:08:34 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.612 03:08:34 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:28.612 03:08:34 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:28.612 03:08:34 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:28.612 03:08:34 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:28.612 03:08:34 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:29.548 Waiting for block devices as requested 00:05:29.548 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:29.548 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:29.809 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:29.809 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:29.809 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:30.068 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:30.068 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:30.068 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:30.068 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:30.068 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:30.327 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:30.327 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:30.327 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:30.327 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:30.587 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:30.587 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:30.587 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:30.846 03:08:36 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:30.846 03:08:36 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:30.846 03:08:36 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:30.846 03:08:36 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:30.846 03:08:36 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:30.846 03:08:36 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:30.846 03:08:36 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:30.846 03:08:36 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:30.846 03:08:36 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:30.846 03:08:36 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:30.846 03:08:36 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:30.846 03:08:36 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:30.846 03:08:36 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:30.846 03:08:36 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:30.846 03:08:36 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:30.846 03:08:36 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:30.846 03:08:36 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:30.847 03:08:36 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:30.847 03:08:36 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:30.847 03:08:36 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:30.847 03:08:36 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:30.847 03:08:36 -- common/autotest_common.sh@1557 -- # continue 00:05:30.847 03:08:36 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:30.847 03:08:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.847 03:08:36 -- common/autotest_common.sh@10 -- # set +x 00:05:30.847 03:08:36 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:30.847 03:08:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.847 03:08:36 -- common/autotest_common.sh@10 -- # set +x 00:05:30.847 03:08:36 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:31.812 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:31.812 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:31.812 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:31.812 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:32.073 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:32.073 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:32.073 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:32.073 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:32.073 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:32.073 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
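
[Editor's note] The pre-cleanup pass traced above briefly returned the devices to their kernel drivers ("Waiting for block devices as requested") so it could resolve each NVMe BDF to its character device through sysfs and inspect it with nvme-cli. A standalone equivalent of that probe, assuming nvme-cli is installed and using the BDF from this host's trace:

    #!/usr/bin/env bash
    # Map a PCI BDF to its /dev/nvmeN node, then check OACS and unvmcap.
    bdf=0000:88:00.0                         # this host's NVMe controller
    for link in /sys/class/nvme/nvme*; do
        # the resolved sysfs path contains .../<bdf>/nvme/nvmeN for a match
        [[ $(readlink -f "$link") == *"/$bdf/nvme/"* ]] && ctrlr=/dev/${link##*/}
    done
    : "${ctrlr:?no /dev/nvme node found for $bdf}"
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # ' 0xf' in the trace
    (( oacs & 0x8 )) && echo "$ctrlr: namespace management supported"
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # ' 0' in the trace
    (( unvmcap == 0 )) && echo "$ctrlr: no unallocated capacity, nothing to revert"

This matches the trace: oacs comes back 0xf (bit 3 set, so namespace management is available) and unvmcap is 0, so the namespace-revert path is skipped and the script continues into the afterboot vfio-pci rebind logged around this point.
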
00:05:32.073 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:32.073 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:32.073 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:32.073 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:32.073 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:32.073 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:33.042 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:33.042 03:08:39 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:33.042 03:08:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.042 03:08:39 -- common/autotest_common.sh@10 -- # set +x 00:05:33.043 03:08:39 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:33.043 03:08:39 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:33.043 03:08:39 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:33.043 03:08:39 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:33.043 03:08:39 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:33.043 03:08:39 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:33.043 03:08:39 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:33.043 03:08:39 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:33.043 03:08:39 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:33.043 03:08:39 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:33.043 03:08:39 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:33.301 03:08:39 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:33.301 03:08:39 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:33.301 03:08:39 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:33.301 03:08:39 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:33.301 03:08:39 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:33.301 03:08:39 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:33.301 03:08:39 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:33.301 03:08:39 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:33.302 03:08:39 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:33.302 03:08:39 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3057161 00:05:33.302 03:08:39 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.302 03:08:39 -- common/autotest_common.sh@1598 -- # waitforlisten 3057161 00:05:33.302 03:08:39 -- common/autotest_common.sh@829 -- # '[' -z 3057161 ']' 00:05:33.302 03:08:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.302 03:08:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.302 03:08:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.302 03:08:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.302 03:08:39 -- common/autotest_common.sh@10 -- # set +x 00:05:33.302 [2024-07-15 03:08:39.289489] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
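
[Editor's note] The DPDK EAL banner that follows belongs to the spdk_tgt launched just above (pid 3057161); once its RPC socket answers, opal_revert_cleanup attaches the controller and asks it to revert its Opal TPer. A condensed standalone rendering, assuming it is run from an SPDK checkout (the BDF and the 'test' password are taken from this trace):

    #!/usr/bin/env bash
    # Start the SPDK target, wait for RPC, attach the drive, revert Opal.
    ./build/bin/spdk_tgt &
    tgt=$!
    # Poll until the RPC server answers; the harness uses its own waitforlisten.
    until ./scripts/rpc.py rpc_get_methods &> /dev/null; do sleep 0.5; done
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
    ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test || true
    # '|| true' mirrors the trace: this drive refuses the admin SP session
    # (error 18), so the RPC fails with the JSON-RPC "Internal error" seen below.
    kill "$tgt"
    wait "$tgt" || true
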
00:05:33.302 [2024-07-15 03:08:39.289575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057161 ] 00:05:33.302 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.302 [2024-07-15 03:08:39.348413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.302 [2024-07-15 03:08:39.435789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.559 03:08:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.559 03:08:39 -- common/autotest_common.sh@862 -- # return 0 00:05:33.559 03:08:39 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:33.559 03:08:39 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:33.559 03:08:39 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:36.841 nvme0n1 00:05:36.841 03:08:42 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:37.099 [2024-07-15 03:08:43.017014] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:37.099 [2024-07-15 03:08:43.017062] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:37.099 request: 00:05:37.099 { 00:05:37.099 "nvme_ctrlr_name": "nvme0", 00:05:37.099 "password": "test", 00:05:37.099 "method": "bdev_nvme_opal_revert", 00:05:37.099 "req_id": 1 00:05:37.099 } 00:05:37.099 Got JSON-RPC error response 00:05:37.099 response: 00:05:37.099 { 00:05:37.099 "code": -32603, 00:05:37.099 "message": "Internal error" 00:05:37.099 } 00:05:37.099 03:08:43 -- common/autotest_common.sh@1604 -- # true 00:05:37.099 03:08:43 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:37.099 03:08:43 -- common/autotest_common.sh@1608 -- # killprocess 3057161 00:05:37.099 03:08:43 -- common/autotest_common.sh@948 -- # '[' -z 3057161 ']' 00:05:37.099 03:08:43 -- common/autotest_common.sh@952 -- # kill -0 3057161 00:05:37.099 03:08:43 -- common/autotest_common.sh@953 -- # uname 00:05:37.099 03:08:43 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.099 03:08:43 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3057161 00:05:37.099 03:08:43 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.099 03:08:43 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.099 03:08:43 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3057161' 00:05:37.099 killing process with pid 3057161 00:05:37.099 03:08:43 -- common/autotest_common.sh@967 -- # kill 3057161 00:05:37.099 03:08:43 -- common/autotest_common.sh@972 -- # wait 3057161 00:05:38.996 03:08:44 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:38.996 03:08:44 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:38.996 03:08:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:38.996 03:08:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:38.996 03:08:44 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:38.996 03:08:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.997 03:08:44 -- common/autotest_common.sh@10 -- # set +x 00:05:38.997 03:08:44 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:38.997 03:08:44 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:38.997 03:08:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.997 03:08:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.997 03:08:44 -- common/autotest_common.sh@10 -- # set +x 00:05:38.997 ************************************ 00:05:38.997 START TEST env 00:05:38.997 ************************************ 00:05:38.997 03:08:44 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:38.997 * Looking for test storage... 00:05:38.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:38.997 03:08:44 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:38.997 03:08:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.997 03:08:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.997 03:08:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.997 ************************************ 00:05:38.997 START TEST env_memory 00:05:38.997 ************************************ 00:05:38.997 03:08:44 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:38.997 00:05:38.997 00:05:38.997 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.997 http://cunit.sourceforge.net/ 00:05:38.997 00:05:38.997 00:05:38.997 Suite: memory 00:05:38.997 Test: alloc and free memory map ...[2024-07-15 03:08:44.921550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:38.997 passed 00:05:38.997 Test: mem map translation ...[2024-07-15 03:08:44.941509] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:38.997 [2024-07-15 03:08:44.941530] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:38.997 [2024-07-15 03:08:44.941587] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:38.997 [2024-07-15 03:08:44.941599] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:38.997 passed 00:05:38.997 Test: mem map registration ...[2024-07-15 03:08:44.982213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:38.997 [2024-07-15 03:08:44.982232] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:38.997 passed 00:05:38.997 Test: mem map adjacent registrations ...passed 00:05:38.997 00:05:38.997 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.997 suites 1 1 n/a 0 0 00:05:38.997 tests 4 4 4 0 0 00:05:38.997 asserts 152 152 152 0 n/a 00:05:38.997 00:05:38.997 Elapsed time = 0.139 seconds 00:05:38.997 00:05:38.997 real 0m0.147s 00:05:38.997 user 0m0.141s 00:05:38.997 sys 0m0.006s 00:05:38.997 03:08:45 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.997 03:08:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:38.997 ************************************ 00:05:38.997 END TEST env_memory 00:05:38.997 ************************************ 00:05:38.997 03:08:45 env -- common/autotest_common.sh@1142 -- # return 0 00:05:38.997 03:08:45 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:38.997 03:08:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.997 03:08:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.997 03:08:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.997 ************************************ 00:05:38.997 START TEST env_vtophys 00:05:38.997 ************************************ 00:05:38.997 03:08:45 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:38.997 EAL: lib.eal log level changed from notice to debug 00:05:38.997 EAL: Detected lcore 0 as core 0 on socket 0 00:05:38.997 EAL: Detected lcore 1 as core 1 on socket 0 00:05:38.997 EAL: Detected lcore 2 as core 2 on socket 0 00:05:38.997 EAL: Detected lcore 3 as core 3 on socket 0 00:05:38.997 EAL: Detected lcore 4 as core 4 on socket 0 00:05:38.997 EAL: Detected lcore 5 as core 5 on socket 0 00:05:38.997 EAL: Detected lcore 6 as core 8 on socket 0 00:05:38.997 EAL: Detected lcore 7 as core 9 on socket 0 00:05:38.997 EAL: Detected lcore 8 as core 10 on socket 0 00:05:38.997 EAL: Detected lcore 9 as core 11 on socket 0 00:05:38.997 EAL: Detected lcore 10 as core 12 on socket 0 00:05:38.997 EAL: Detected lcore 11 as core 13 on socket 0 00:05:38.997 EAL: Detected lcore 12 as core 0 on socket 1 00:05:38.997 EAL: Detected lcore 13 as core 1 on socket 1 00:05:38.997 EAL: Detected lcore 14 as core 2 on socket 1 00:05:38.997 EAL: Detected lcore 15 as core 3 on socket 1 00:05:38.997 EAL: Detected lcore 16 as core 4 on socket 1 00:05:38.997 EAL: Detected lcore 17 as core 5 on socket 1 00:05:38.997 EAL: Detected lcore 18 as core 8 on socket 1 00:05:38.997 EAL: Detected lcore 19 as core 9 on socket 1 00:05:38.997 EAL: Detected lcore 20 as core 10 on socket 1 00:05:38.997 EAL: Detected lcore 21 as core 11 on socket 1 00:05:38.997 EAL: Detected lcore 22 as core 12 on socket 1 00:05:38.997 EAL: Detected lcore 23 as core 13 on socket 1 00:05:38.997 EAL: Detected lcore 24 as core 0 on socket 0 00:05:38.997 EAL: Detected lcore 25 as core 1 on socket 0 00:05:38.997 EAL: Detected lcore 26 as core 2 on socket 0 00:05:38.997 EAL: Detected lcore 27 as core 3 on socket 0 00:05:38.997 EAL: Detected lcore 28 as core 4 on socket 0 00:05:38.997 EAL: Detected lcore 29 as core 5 on socket 0 00:05:38.997 EAL: Detected lcore 30 as core 8 on socket 0 00:05:38.997 EAL: Detected lcore 31 as core 9 on socket 0 00:05:38.997 EAL: Detected lcore 32 as core 10 on socket 0 00:05:38.997 EAL: Detected lcore 33 as core 11 on socket 0 00:05:38.997 EAL: Detected lcore 34 as core 12 on socket 0 00:05:38.997 EAL: Detected lcore 35 as core 13 on socket 0 00:05:38.997 EAL: Detected lcore 36 as core 0 on socket 1 00:05:38.997 EAL: Detected lcore 37 as core 1 on socket 1 00:05:38.997 EAL: Detected lcore 38 as core 2 on socket 1 00:05:38.997 EAL: Detected lcore 39 as core 3 on socket 1 00:05:38.997 EAL: Detected lcore 40 as core 4 on socket 1 00:05:38.997 EAL: Detected lcore 41 as core 5 on socket 1 00:05:38.997 EAL: Detected 
lcore 42 as core 8 on socket 1 00:05:38.997 EAL: Detected lcore 43 as core 9 on socket 1 00:05:38.997 EAL: Detected lcore 44 as core 10 on socket 1 00:05:38.997 EAL: Detected lcore 45 as core 11 on socket 1 00:05:38.997 EAL: Detected lcore 46 as core 12 on socket 1 00:05:38.997 EAL: Detected lcore 47 as core 13 on socket 1 00:05:38.997 EAL: Maximum logical cores by configuration: 128 00:05:38.997 EAL: Detected CPU lcores: 48 00:05:38.997 EAL: Detected NUMA nodes: 2 00:05:38.997 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:38.997 EAL: Detected shared linkage of DPDK 00:05:38.997 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:38.997 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:38.997 EAL: Registered [vdev] bus. 00:05:38.997 EAL: bus.vdev log level changed from disabled to notice 00:05:38.997 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:38.997 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:38.997 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:38.997 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:38.997 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:38.997 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:38.997 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:38.997 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:38.997 EAL: No shared files mode enabled, IPC will be disabled 00:05:38.997 EAL: No shared files mode enabled, IPC is disabled 00:05:38.997 EAL: Bus pci wants IOVA as 'DC' 00:05:38.997 EAL: Bus vdev wants IOVA as 'DC' 00:05:38.997 EAL: Buses did not request a specific IOVA mode. 00:05:38.997 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:38.997 EAL: Selected IOVA mode 'VA' 00:05:38.997 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.997 EAL: Probing VFIO support... 00:05:38.997 EAL: IOMMU type 1 (Type 1) is supported 00:05:38.997 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:38.997 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:38.997 EAL: VFIO support initialized 00:05:38.997 EAL: Ask a virtual area of 0x2e000 bytes 00:05:38.997 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:38.997 EAL: Setting up physically contiguous memory... 
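The VFIO probe and IOVA-mode selection above depend on the host IOMMU being enabled and the vfio-pci module being available. A few hedged, read-only checks that correspond to those EAL messages (exact output varies by kernel and platform):
  ls /sys/kernel/iommu_groups | wc -l     # non-zero => IOMMU groups exist, so IOVA as 'VA' is possible
  lsmod | grep vfio_pci                   # vfio-pci module loaded for userspace device access
  dmesg | grep -iE 'DMAR|IOMMU' | head    # kernel IOMMU bring-up messages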
00:05:38.997 EAL: Setting maximum number of open files to 524288 00:05:38.997 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:38.997 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:38.997 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:38.997 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.997 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:38.997 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.997 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.997 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:38.997 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:38.997 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.997 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:38.997 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.997 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.997 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:38.997 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:38.997 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.997 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:38.997 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.997 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.997 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:38.997 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:38.997 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.997 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:38.997 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.997 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.997 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:38.997 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:38.997 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:38.997 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.998 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:38.998 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.998 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.998 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:38.998 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:38.998 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.998 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:38.998 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.998 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.998 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:38.998 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:38.998 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.998 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:38.998 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.998 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.998 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:38.998 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:38.998 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.998 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:38.998 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.998 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.998 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:38.998 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:38.998 EAL: Hugepages will be freed exactly as allocated. 00:05:38.998 EAL: No shared files mode enabled, IPC is disabled 00:05:38.998 EAL: No shared files mode enabled, IPC is disabled 00:05:38.998 EAL: TSC frequency is ~2700000 KHz 00:05:38.998 EAL: Main lcore 0 is ready (tid=7f03546f3a00;cpuset=[0]) 00:05:38.998 EAL: Trying to obtain current memory policy. 00:05:38.998 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.998 EAL: Restoring previous memory policy: 0 00:05:38.998 EAL: request: mp_malloc_sync 00:05:38.998 EAL: No shared files mode enabled, IPC is disabled 00:05:38.998 EAL: Heap on socket 0 was expanded by 2MB 00:05:38.998 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:39.256 EAL: Mem event callback 'spdk:(nil)' registered 00:05:39.256 00:05:39.256 00:05:39.256 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.256 http://cunit.sourceforge.net/ 00:05:39.256 00:05:39.256 00:05:39.256 Suite: components_suite 00:05:39.256 Test: vtophys_malloc_test ...passed 00:05:39.256 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:39.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.256 EAL: Restoring previous memory policy: 4 00:05:39.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.256 EAL: request: mp_malloc_sync 00:05:39.256 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: Heap on socket 0 was expanded by 4MB 00:05:39.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.256 EAL: request: mp_malloc_sync 00:05:39.256 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: Heap on socket 0 was shrunk by 4MB 00:05:39.256 EAL: Trying to obtain current memory policy. 00:05:39.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.256 EAL: Restoring previous memory policy: 4 00:05:39.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.256 EAL: request: mp_malloc_sync 00:05:39.256 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: Heap on socket 0 was expanded by 6MB 00:05:39.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.256 EAL: request: mp_malloc_sync 00:05:39.256 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: Heap on socket 0 was shrunk by 6MB 00:05:39.256 EAL: Trying to obtain current memory policy. 00:05:39.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.256 EAL: Restoring previous memory policy: 4 00:05:39.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.256 EAL: request: mp_malloc_sync 00:05:39.256 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: Heap on socket 0 was expanded by 10MB 00:05:39.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.256 EAL: request: mp_malloc_sync 00:05:39.256 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: Heap on socket 0 was shrunk by 10MB 00:05:39.256 EAL: Trying to obtain current memory policy. 
00:05:39.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.256 EAL: Restoring previous memory policy: 4 00:05:39.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.256 EAL: request: mp_malloc_sync 00:05:39.256 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: Heap on socket 0 was expanded by 18MB 00:05:39.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.256 EAL: request: mp_malloc_sync 00:05:39.256 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: Heap on socket 0 was shrunk by 18MB 00:05:39.256 EAL: Trying to obtain current memory policy. 00:05:39.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.256 EAL: Restoring previous memory policy: 4 00:05:39.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.256 EAL: request: mp_malloc_sync 00:05:39.256 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: Heap on socket 0 was expanded by 34MB 00:05:39.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.256 EAL: request: mp_malloc_sync 00:05:39.256 EAL: No shared files mode enabled, IPC is disabled 00:05:39.256 EAL: Heap on socket 0 was shrunk by 34MB 00:05:39.256 EAL: Trying to obtain current memory policy. 00:05:39.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.256 EAL: Restoring previous memory policy: 4 00:05:39.256 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.256 EAL: request: mp_malloc_sync 00:05:39.257 EAL: No shared files mode enabled, IPC is disabled 00:05:39.257 EAL: Heap on socket 0 was expanded by 66MB 00:05:39.257 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.257 EAL: request: mp_malloc_sync 00:05:39.257 EAL: No shared files mode enabled, IPC is disabled 00:05:39.257 EAL: Heap on socket 0 was shrunk by 66MB 00:05:39.257 EAL: Trying to obtain current memory policy. 00:05:39.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.257 EAL: Restoring previous memory policy: 4 00:05:39.257 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.257 EAL: request: mp_malloc_sync 00:05:39.257 EAL: No shared files mode enabled, IPC is disabled 00:05:39.257 EAL: Heap on socket 0 was expanded by 130MB 00:05:39.257 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.257 EAL: request: mp_malloc_sync 00:05:39.257 EAL: No shared files mode enabled, IPC is disabled 00:05:39.257 EAL: Heap on socket 0 was shrunk by 130MB 00:05:39.257 EAL: Trying to obtain current memory policy. 00:05:39.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.257 EAL: Restoring previous memory policy: 4 00:05:39.257 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.257 EAL: request: mp_malloc_sync 00:05:39.257 EAL: No shared files mode enabled, IPC is disabled 00:05:39.257 EAL: Heap on socket 0 was expanded by 258MB 00:05:39.514 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.514 EAL: request: mp_malloc_sync 00:05:39.514 EAL: No shared files mode enabled, IPC is disabled 00:05:39.514 EAL: Heap on socket 0 was shrunk by 258MB 00:05:39.514 EAL: Trying to obtain current memory policy. 
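Each expanded-by/shrunk-by pair above is DPDK's dynamic memory subsystem mapping and unmapping 2 MB hugepages as the test allocates and frees buffers, with the 'spdk:(nil)' mem event callback notified on every change. A hedged way to observe the same churn from outside the process while vtophys runs:
  grep -E 'HugePages_(Total|Free)' /proc/meminfo
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages   # per-NUMA-node view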
00:05:39.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.514 EAL: Restoring previous memory policy: 4 00:05:39.514 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.514 EAL: request: mp_malloc_sync 00:05:39.514 EAL: No shared files mode enabled, IPC is disabled 00:05:39.514 EAL: Heap on socket 0 was expanded by 514MB 00:05:39.772 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.773 EAL: request: mp_malloc_sync 00:05:39.773 EAL: No shared files mode enabled, IPC is disabled 00:05:39.773 EAL: Heap on socket 0 was shrunk by 514MB 00:05:39.773 EAL: Trying to obtain current memory policy. 00:05:39.773 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.031 EAL: Restoring previous memory policy: 4 00:05:40.031 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.031 EAL: request: mp_malloc_sync 00:05:40.031 EAL: No shared files mode enabled, IPC is disabled 00:05:40.031 EAL: Heap on socket 0 was expanded by 1026MB 00:05:40.312 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.571 EAL: request: mp_malloc_sync 00:05:40.571 EAL: No shared files mode enabled, IPC is disabled 00:05:40.571 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:40.571 passed 00:05:40.571 00:05:40.571 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.571 suites 1 1 n/a 0 0 00:05:40.571 tests 2 2 2 0 0 00:05:40.571 asserts 497 497 497 0 n/a 00:05:40.571 00:05:40.571 Elapsed time = 1.363 seconds 00:05:40.571 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.571 EAL: request: mp_malloc_sync 00:05:40.571 EAL: No shared files mode enabled, IPC is disabled 00:05:40.571 EAL: Heap on socket 0 was shrunk by 2MB 00:05:40.571 EAL: No shared files mode enabled, IPC is disabled 00:05:40.571 EAL: No shared files mode enabled, IPC is disabled 00:05:40.571 EAL: No shared files mode enabled, IPC is disabled 00:05:40.571 00:05:40.571 real 0m1.487s 00:05:40.571 user 0m0.845s 00:05:40.571 sys 0m0.602s 00:05:40.571 03:08:46 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.571 03:08:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:40.571 ************************************ 00:05:40.571 END TEST env_vtophys 00:05:40.571 ************************************ 00:05:40.571 03:08:46 env -- common/autotest_common.sh@1142 -- # return 0 00:05:40.571 03:08:46 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:40.571 03:08:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.571 03:08:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.571 03:08:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.571 ************************************ 00:05:40.571 START TEST env_pci 00:05:40.571 ************************************ 00:05:40.571 03:08:46 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:40.571 00:05:40.571 00:05:40.571 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.571 http://cunit.sourceforge.net/ 00:05:40.571 00:05:40.571 00:05:40.571 Suite: pci 00:05:40.571 Test: pci_hook ...[2024-07-15 03:08:46.629014] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3058047 has claimed it 00:05:40.571 EAL: Cannot find device (10000:00:01.0) 00:05:40.571 EAL: Failed to attach device on primary process 00:05:40.571 passed 00:05:40.571 
00:05:40.571 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.571 suites 1 1 n/a 0 0 00:05:40.571 tests 1 1 1 0 0 00:05:40.571 asserts 25 25 25 0 n/a 00:05:40.571 00:05:40.571 Elapsed time = 0.022 seconds 00:05:40.571 00:05:40.571 real 0m0.035s 00:05:40.571 user 0m0.013s 00:05:40.571 sys 0m0.022s 00:05:40.571 03:08:46 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.571 03:08:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:40.571 ************************************ 00:05:40.571 END TEST env_pci 00:05:40.571 ************************************ 00:05:40.571 03:08:46 env -- common/autotest_common.sh@1142 -- # return 0 00:05:40.571 03:08:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:40.571 03:08:46 env -- env/env.sh@15 -- # uname 00:05:40.571 03:08:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.571 03:08:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:40.571 03:08:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.571 03:08:46 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:40.571 03:08:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.571 03:08:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.571 ************************************ 00:05:40.571 START TEST env_dpdk_post_init 00:05:40.571 ************************************ 00:05:40.571 03:08:46 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.830 EAL: Detected CPU lcores: 48 00:05:40.830 EAL: Detected NUMA nodes: 2 00:05:40.830 EAL: Detected shared linkage of DPDK 00:05:40.830 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.830 EAL: Selected IOVA mode 'VA' 00:05:40.830 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.830 EAL: VFIO support initialized 00:05:40.830 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.830 EAL: Using IOMMU type 1 (Type 1) 00:05:40.830 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:40.831 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:41.089 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 
0000:80:04.6 (socket 1) 00:05:41.089 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:41.654 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:44.935 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:44.935 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:45.194 Starting DPDK initialization... 00:05:45.194 Starting SPDK post initialization... 00:05:45.194 SPDK NVMe probe 00:05:45.194 Attaching to 0000:88:00.0 00:05:45.194 Attached to 0000:88:00.0 00:05:45.194 Cleaning up... 00:05:45.194 00:05:45.194 real 0m4.398s 00:05:45.194 user 0m3.275s 00:05:45.194 sys 0m0.185s 00:05:45.194 03:08:51 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.194 03:08:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.194 ************************************ 00:05:45.194 END TEST env_dpdk_post_init 00:05:45.194 ************************************ 00:05:45.194 03:08:51 env -- common/autotest_common.sh@1142 -- # return 0 00:05:45.194 03:08:51 env -- env/env.sh@26 -- # uname 00:05:45.194 03:08:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:45.194 03:08:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.194 03:08:51 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.194 03:08:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.194 03:08:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.194 ************************************ 00:05:45.194 START TEST env_mem_callbacks 00:05:45.194 ************************************ 00:05:45.194 03:08:51 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.194 EAL: Detected CPU lcores: 48 00:05:45.194 EAL: Detected NUMA nodes: 2 00:05:45.194 EAL: Detected shared linkage of DPDK 00:05:45.194 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:45.194 EAL: Selected IOVA mode 'VA' 00:05:45.194 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.194 EAL: VFIO support initialized 00:05:45.194 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:45.194 00:05:45.194 00:05:45.194 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.194 http://cunit.sourceforge.net/ 00:05:45.194 00:05:45.194 00:05:45.194 Suite: memory 00:05:45.194 Test: test ... 
00:05:45.194 register 0x200000200000 2097152 00:05:45.194 malloc 3145728 00:05:45.194 register 0x200000400000 4194304 00:05:45.194 buf 0x200000500000 len 3145728 PASSED 00:05:45.194 malloc 64 00:05:45.194 buf 0x2000004fff40 len 64 PASSED 00:05:45.194 malloc 4194304 00:05:45.194 register 0x200000800000 6291456 00:05:45.194 buf 0x200000a00000 len 4194304 PASSED 00:05:45.194 free 0x200000500000 3145728 00:05:45.194 free 0x2000004fff40 64 00:05:45.194 unregister 0x200000400000 4194304 PASSED 00:05:45.194 free 0x200000a00000 4194304 00:05:45.194 unregister 0x200000800000 6291456 PASSED 00:05:45.194 malloc 8388608 00:05:45.194 register 0x200000400000 10485760 00:05:45.194 buf 0x200000600000 len 8388608 PASSED 00:05:45.194 free 0x200000600000 8388608 00:05:45.194 unregister 0x200000400000 10485760 PASSED 00:05:45.194 passed 00:05:45.194 00:05:45.194 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.194 suites 1 1 n/a 0 0 00:05:45.194 tests 1 1 1 0 0 00:05:45.194 asserts 15 15 15 0 n/a 00:05:45.194 00:05:45.194 Elapsed time = 0.005 seconds 00:05:45.194 00:05:45.194 real 0m0.047s 00:05:45.194 user 0m0.014s 00:05:45.194 sys 0m0.033s 00:05:45.194 03:08:51 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.194 03:08:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:45.194 ************************************ 00:05:45.194 END TEST env_mem_callbacks 00:05:45.194 ************************************ 00:05:45.194 03:08:51 env -- common/autotest_common.sh@1142 -- # return 0 00:05:45.194 00:05:45.194 real 0m6.408s 00:05:45.194 user 0m4.407s 00:05:45.194 sys 0m1.040s 00:05:45.194 03:08:51 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.194 03:08:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.194 ************************************ 00:05:45.194 END TEST env 00:05:45.194 ************************************ 00:05:45.194 03:08:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.194 03:08:51 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:45.194 03:08:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.194 03:08:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.194 03:08:51 -- common/autotest_common.sh@10 -- # set +x 00:05:45.194 ************************************ 00:05:45.194 START TEST rpc 00:05:45.194 ************************************ 00:05:45.194 03:08:51 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:45.194 * Looking for test storage... 00:05:45.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.194 03:08:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3058706 00:05:45.194 03:08:51 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:45.194 03:08:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.194 03:08:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3058706 00:05:45.194 03:08:51 rpc -- common/autotest_common.sh@829 -- # '[' -z 3058706 ']' 00:05:45.194 03:08:51 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.194 03:08:51 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.194 03:08:51 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:45.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.194 03:08:51 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.194 03:08:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.453 [2024-07-15 03:08:51.369811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:45.453 [2024-07-15 03:08:51.369916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058706 ] 00:05:45.453 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.453 [2024-07-15 03:08:51.428238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.453 [2024-07-15 03:08:51.514373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:45.453 [2024-07-15 03:08:51.514429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3058706' to capture a snapshot of events at runtime. 00:05:45.453 [2024-07-15 03:08:51.514457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:45.453 [2024-07-15 03:08:51.514468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:45.453 [2024-07-15 03:08:51.514477] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3058706 for offline analysis/debug. 00:05:45.453 [2024-07-15 03:08:51.514502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.712 03:08:51 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.712 03:08:51 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.712 03:08:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.712 03:08:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.712 03:08:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.712 03:08:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.712 03:08:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.712 03:08:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.712 03:08:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.712 ************************************ 00:05:45.712 START TEST rpc_integrity 00:05:45.712 ************************************ 00:05:45.712 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:45.712 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.712 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.712 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.712 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.712 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:45.712 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.712 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.712 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.712 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.712 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.712 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.970 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.970 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.970 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.970 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.970 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.970 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.970 { 00:05:45.970 "name": "Malloc0", 00:05:45.970 "aliases": [ 00:05:45.970 "4bec19e4-9501-400b-ac64-0f133fabbd6f" 00:05:45.970 ], 00:05:45.970 "product_name": "Malloc disk", 00:05:45.971 "block_size": 512, 00:05:45.971 "num_blocks": 16384, 00:05:45.971 "uuid": "4bec19e4-9501-400b-ac64-0f133fabbd6f", 00:05:45.971 "assigned_rate_limits": { 00:05:45.971 "rw_ios_per_sec": 0, 00:05:45.971 "rw_mbytes_per_sec": 0, 00:05:45.971 "r_mbytes_per_sec": 0, 00:05:45.971 "w_mbytes_per_sec": 0 00:05:45.971 }, 00:05:45.971 "claimed": false, 00:05:45.971 "zoned": false, 00:05:45.971 "supported_io_types": { 00:05:45.971 "read": true, 00:05:45.971 "write": true, 00:05:45.971 "unmap": true, 00:05:45.971 "flush": true, 00:05:45.971 "reset": true, 00:05:45.971 "nvme_admin": false, 00:05:45.971 "nvme_io": false, 00:05:45.971 "nvme_io_md": false, 00:05:45.971 "write_zeroes": true, 00:05:45.971 "zcopy": true, 00:05:45.971 "get_zone_info": false, 00:05:45.971 "zone_management": false, 00:05:45.971 "zone_append": false, 00:05:45.971 "compare": false, 00:05:45.971 "compare_and_write": false, 00:05:45.971 "abort": true, 00:05:45.971 "seek_hole": false, 00:05:45.971 "seek_data": false, 00:05:45.971 "copy": true, 00:05:45.971 "nvme_iov_md": false 00:05:45.971 }, 00:05:45.971 "memory_domains": [ 00:05:45.971 { 00:05:45.971 "dma_device_id": "system", 00:05:45.971 "dma_device_type": 1 00:05:45.971 }, 00:05:45.971 { 00:05:45.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.971 "dma_device_type": 2 00:05:45.971 } 00:05:45.971 ], 00:05:45.971 "driver_specific": {} 00:05:45.971 } 00:05:45.971 ]' 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.971 [2024-07-15 03:08:51.907797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.971 [2024-07-15 03:08:51.907846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.971 [2024-07-15 03:08:51.907869] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbc6bb0 00:05:45.971 [2024-07-15 03:08:51.907893] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.971 
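The rpc_integrity sequence above can be replayed by hand against a running spdk_tgt using the same RPCs; a sketch, assuming the default /var/tmp/spdk.sock socket and an SPDK checkout at $rootdir:
  rpc="$rootdir/scripts/rpc.py"
  malloc=$("$rpc" bdev_malloc_create 8 512)               # 8 MB bdev with 512 B blocks -> Malloc0
  "$rpc" bdev_passthru_create -b "$malloc" -p Passthru0   # claims the base bdev, as logged above
  "$rpc" bdev_get_bdevs | jq length                       # 2: the malloc bdev plus the passthru
  "$rpc" bdev_passthru_delete Passthru0
  "$rpc" bdev_malloc_delete "$malloc"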
[2024-07-15 03:08:51.909404] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.971 [2024-07-15 03:08:51.909436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.971 Passthru0 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.971 { 00:05:45.971 "name": "Malloc0", 00:05:45.971 "aliases": [ 00:05:45.971 "4bec19e4-9501-400b-ac64-0f133fabbd6f" 00:05:45.971 ], 00:05:45.971 "product_name": "Malloc disk", 00:05:45.971 "block_size": 512, 00:05:45.971 "num_blocks": 16384, 00:05:45.971 "uuid": "4bec19e4-9501-400b-ac64-0f133fabbd6f", 00:05:45.971 "assigned_rate_limits": { 00:05:45.971 "rw_ios_per_sec": 0, 00:05:45.971 "rw_mbytes_per_sec": 0, 00:05:45.971 "r_mbytes_per_sec": 0, 00:05:45.971 "w_mbytes_per_sec": 0 00:05:45.971 }, 00:05:45.971 "claimed": true, 00:05:45.971 "claim_type": "exclusive_write", 00:05:45.971 "zoned": false, 00:05:45.971 "supported_io_types": { 00:05:45.971 "read": true, 00:05:45.971 "write": true, 00:05:45.971 "unmap": true, 00:05:45.971 "flush": true, 00:05:45.971 "reset": true, 00:05:45.971 "nvme_admin": false, 00:05:45.971 "nvme_io": false, 00:05:45.971 "nvme_io_md": false, 00:05:45.971 "write_zeroes": true, 00:05:45.971 "zcopy": true, 00:05:45.971 "get_zone_info": false, 00:05:45.971 "zone_management": false, 00:05:45.971 "zone_append": false, 00:05:45.971 "compare": false, 00:05:45.971 "compare_and_write": false, 00:05:45.971 "abort": true, 00:05:45.971 "seek_hole": false, 00:05:45.971 "seek_data": false, 00:05:45.971 "copy": true, 00:05:45.971 "nvme_iov_md": false 00:05:45.971 }, 00:05:45.971 "memory_domains": [ 00:05:45.971 { 00:05:45.971 "dma_device_id": "system", 00:05:45.971 "dma_device_type": 1 00:05:45.971 }, 00:05:45.971 { 00:05:45.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.971 "dma_device_type": 2 00:05:45.971 } 00:05:45.971 ], 00:05:45.971 "driver_specific": {} 00:05:45.971 }, 00:05:45.971 { 00:05:45.971 "name": "Passthru0", 00:05:45.971 "aliases": [ 00:05:45.971 "d206f7e2-2e7b-5f1d-9a42-e909b5138917" 00:05:45.971 ], 00:05:45.971 "product_name": "passthru", 00:05:45.971 "block_size": 512, 00:05:45.971 "num_blocks": 16384, 00:05:45.971 "uuid": "d206f7e2-2e7b-5f1d-9a42-e909b5138917", 00:05:45.971 "assigned_rate_limits": { 00:05:45.971 "rw_ios_per_sec": 0, 00:05:45.971 "rw_mbytes_per_sec": 0, 00:05:45.971 "r_mbytes_per_sec": 0, 00:05:45.971 "w_mbytes_per_sec": 0 00:05:45.971 }, 00:05:45.971 "claimed": false, 00:05:45.971 "zoned": false, 00:05:45.971 "supported_io_types": { 00:05:45.971 "read": true, 00:05:45.971 "write": true, 00:05:45.971 "unmap": true, 00:05:45.971 "flush": true, 00:05:45.971 "reset": true, 00:05:45.971 "nvme_admin": false, 00:05:45.971 "nvme_io": false, 00:05:45.971 "nvme_io_md": false, 00:05:45.971 "write_zeroes": true, 00:05:45.971 "zcopy": true, 00:05:45.971 "get_zone_info": false, 00:05:45.971 "zone_management": false, 00:05:45.971 "zone_append": false, 00:05:45.971 "compare": false, 00:05:45.971 "compare_and_write": false, 00:05:45.971 "abort": true, 00:05:45.971 "seek_hole": false, 
00:05:45.971 "seek_data": false, 00:05:45.971 "copy": true, 00:05:45.971 "nvme_iov_md": false 00:05:45.971 }, 00:05:45.971 "memory_domains": [ 00:05:45.971 { 00:05:45.971 "dma_device_id": "system", 00:05:45.971 "dma_device_type": 1 00:05:45.971 }, 00:05:45.971 { 00:05:45.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.971 "dma_device_type": 2 00:05:45.971 } 00:05:45.971 ], 00:05:45.971 "driver_specific": { 00:05:45.971 "passthru": { 00:05:45.971 "name": "Passthru0", 00:05:45.971 "base_bdev_name": "Malloc0" 00:05:45.971 } 00:05:45.971 } 00:05:45.971 } 00:05:45.971 ]' 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.971 03:08:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.971 03:08:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.971 03:08:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.971 00:05:45.971 real 0m0.233s 00:05:45.971 user 0m0.154s 00:05:45.971 sys 0m0.018s 00:05:45.971 03:08:52 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.971 03:08:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.971 ************************************ 00:05:45.971 END TEST rpc_integrity 00:05:45.971 ************************************ 00:05:45.971 03:08:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.971 03:08:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.971 03:08:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.971 03:08:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.971 03:08:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.971 ************************************ 00:05:45.971 START TEST rpc_plugins 00:05:45.971 ************************************ 00:05:45.971 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:45.971 03:08:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.971 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.971 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.971 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.971 03:08:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.971 03:08:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:45.971 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.971 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.971 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.971 03:08:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.971 { 00:05:45.971 "name": "Malloc1", 00:05:45.971 "aliases": [ 00:05:45.971 "379495c2-f0ff-42de-b6e0-07456393866a" 00:05:45.971 ], 00:05:45.971 "product_name": "Malloc disk", 00:05:45.971 "block_size": 4096, 00:05:45.971 "num_blocks": 256, 00:05:45.971 "uuid": "379495c2-f0ff-42de-b6e0-07456393866a", 00:05:45.971 "assigned_rate_limits": { 00:05:45.971 "rw_ios_per_sec": 0, 00:05:45.971 "rw_mbytes_per_sec": 0, 00:05:45.971 "r_mbytes_per_sec": 0, 00:05:45.971 "w_mbytes_per_sec": 0 00:05:45.971 }, 00:05:45.971 "claimed": false, 00:05:45.971 "zoned": false, 00:05:45.971 "supported_io_types": { 00:05:45.971 "read": true, 00:05:45.971 "write": true, 00:05:45.972 "unmap": true, 00:05:45.972 "flush": true, 00:05:45.972 "reset": true, 00:05:45.972 "nvme_admin": false, 00:05:45.972 "nvme_io": false, 00:05:45.972 "nvme_io_md": false, 00:05:45.972 "write_zeroes": true, 00:05:45.972 "zcopy": true, 00:05:45.972 "get_zone_info": false, 00:05:45.972 "zone_management": false, 00:05:45.972 "zone_append": false, 00:05:45.972 "compare": false, 00:05:45.972 "compare_and_write": false, 00:05:45.972 "abort": true, 00:05:45.972 "seek_hole": false, 00:05:45.972 "seek_data": false, 00:05:45.972 "copy": true, 00:05:45.972 "nvme_iov_md": false 00:05:45.972 }, 00:05:45.972 "memory_domains": [ 00:05:45.972 { 00:05:45.972 "dma_device_id": "system", 00:05:45.972 "dma_device_type": 1 00:05:45.972 }, 00:05:45.972 { 00:05:45.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.972 "dma_device_type": 2 00:05:45.972 } 00:05:45.972 ], 00:05:45.972 "driver_specific": {} 00:05:45.972 } 00:05:45.972 ]' 00:05:45.972 03:08:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:46.229 03:08:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:46.229 03:08:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:46.229 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.229 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.229 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.229 03:08:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:46.229 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.229 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.229 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.229 03:08:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:46.229 03:08:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:46.229 03:08:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:46.229 00:05:46.229 real 0m0.109s 00:05:46.229 user 0m0.073s 00:05:46.229 sys 0m0.010s 00:05:46.229 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.229 03:08:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.229 ************************************ 00:05:46.229 END TEST rpc_plugins 00:05:46.229 ************************************ 00:05:46.229 03:08:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.229 03:08:52 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:46.229 03:08:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.229 03:08:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.229 03:08:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.229 ************************************ 00:05:46.229 START TEST rpc_trace_cmd_test 00:05:46.229 ************************************ 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:46.229 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3058706", 00:05:46.229 "tpoint_group_mask": "0x8", 00:05:46.229 "iscsi_conn": { 00:05:46.229 "mask": "0x2", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "scsi": { 00:05:46.229 "mask": "0x4", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "bdev": { 00:05:46.229 "mask": "0x8", 00:05:46.229 "tpoint_mask": "0xffffffffffffffff" 00:05:46.229 }, 00:05:46.229 "nvmf_rdma": { 00:05:46.229 "mask": "0x10", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "nvmf_tcp": { 00:05:46.229 "mask": "0x20", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "ftl": { 00:05:46.229 "mask": "0x40", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "blobfs": { 00:05:46.229 "mask": "0x80", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "dsa": { 00:05:46.229 "mask": "0x200", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "thread": { 00:05:46.229 "mask": "0x400", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "nvme_pcie": { 00:05:46.229 "mask": "0x800", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "iaa": { 00:05:46.229 "mask": "0x1000", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "nvme_tcp": { 00:05:46.229 "mask": "0x2000", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "bdev_nvme": { 00:05:46.229 "mask": "0x4000", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 }, 00:05:46.229 "sock": { 00:05:46.229 "mask": "0x8000", 00:05:46.229 "tpoint_mask": "0x0" 00:05:46.229 } 00:05:46.229 }' 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.229 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.487 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.487 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.487 03:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
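The trace assertions above query the target's tracepoint state over JSON-RPC; a hedged replay, with jq filters mirroring the test (group mask 0x8 is the bdev tpoint group enabled by starting spdk_tgt with -e bdev):
  info=$("$rootdir/scripts/rpc.py" trace_get_info)
  echo "$info" | jq -r .tpoint_group_mask    # '0x8' in this run
  echo "$info" | jq -r .bdev.tpoint_mask     # 0xffffffffffffffff => all bdev tpoints enabled
  echo "$info" | jq -r .tpoint_shm_path      # shm file that the spdk_trace tool reads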
00:05:46.487 00:05:46.487 real 0m0.190s 00:05:46.487 user 0m0.167s 00:05:46.487 sys 0m0.014s 00:05:46.487 03:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.487 03:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.487 ************************************ 00:05:46.487 END TEST rpc_trace_cmd_test 00:05:46.487 ************************************ 00:05:46.487 03:08:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.487 03:08:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.487 03:08:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.487 03:08:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.487 03:08:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.487 03:08:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.487 03:08:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.487 ************************************ 00:05:46.487 START TEST rpc_daemon_integrity 00:05:46.487 ************************************ 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.487 { 00:05:46.487 "name": "Malloc2", 00:05:46.487 "aliases": [ 00:05:46.487 "92b879d2-22cb-4ad3-9f87-f37ee60e3735" 00:05:46.487 ], 00:05:46.487 "product_name": "Malloc disk", 00:05:46.487 "block_size": 512, 00:05:46.487 "num_blocks": 16384, 00:05:46.487 "uuid": "92b879d2-22cb-4ad3-9f87-f37ee60e3735", 00:05:46.487 "assigned_rate_limits": { 00:05:46.487 "rw_ios_per_sec": 0, 00:05:46.487 "rw_mbytes_per_sec": 0, 00:05:46.487 "r_mbytes_per_sec": 0, 00:05:46.487 "w_mbytes_per_sec": 0 00:05:46.487 }, 00:05:46.487 "claimed": false, 00:05:46.487 "zoned": false, 00:05:46.487 "supported_io_types": { 00:05:46.487 "read": true, 00:05:46.487 "write": true, 00:05:46.487 "unmap": true, 00:05:46.487 "flush": true, 00:05:46.487 "reset": true, 00:05:46.487 "nvme_admin": false, 00:05:46.487 "nvme_io": false, 
00:05:46.487 "nvme_io_md": false, 00:05:46.487 "write_zeroes": true, 00:05:46.487 "zcopy": true, 00:05:46.487 "get_zone_info": false, 00:05:46.487 "zone_management": false, 00:05:46.487 "zone_append": false, 00:05:46.487 "compare": false, 00:05:46.487 "compare_and_write": false, 00:05:46.487 "abort": true, 00:05:46.487 "seek_hole": false, 00:05:46.487 "seek_data": false, 00:05:46.487 "copy": true, 00:05:46.487 "nvme_iov_md": false 00:05:46.487 }, 00:05:46.487 "memory_domains": [ 00:05:46.487 { 00:05:46.487 "dma_device_id": "system", 00:05:46.487 "dma_device_type": 1 00:05:46.487 }, 00:05:46.487 { 00:05:46.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.487 "dma_device_type": 2 00:05:46.487 } 00:05:46.487 ], 00:05:46.487 "driver_specific": {} 00:05:46.487 } 00:05:46.487 ]' 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.487 [2024-07-15 03:08:52.581937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.487 [2024-07-15 03:08:52.581985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.487 [2024-07-15 03:08:52.582010] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbc75b0 00:05:46.487 [2024-07-15 03:08:52.582025] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.487 [2024-07-15 03:08:52.583370] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.487 [2024-07-15 03:08:52.583410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.487 Passthru0 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.487 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.488 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.488 { 00:05:46.488 "name": "Malloc2", 00:05:46.488 "aliases": [ 00:05:46.488 "92b879d2-22cb-4ad3-9f87-f37ee60e3735" 00:05:46.488 ], 00:05:46.488 "product_name": "Malloc disk", 00:05:46.488 "block_size": 512, 00:05:46.488 "num_blocks": 16384, 00:05:46.488 "uuid": "92b879d2-22cb-4ad3-9f87-f37ee60e3735", 00:05:46.488 "assigned_rate_limits": { 00:05:46.488 "rw_ios_per_sec": 0, 00:05:46.488 "rw_mbytes_per_sec": 0, 00:05:46.488 "r_mbytes_per_sec": 0, 00:05:46.488 "w_mbytes_per_sec": 0 00:05:46.488 }, 00:05:46.488 "claimed": true, 00:05:46.488 "claim_type": "exclusive_write", 00:05:46.488 "zoned": false, 00:05:46.488 "supported_io_types": { 00:05:46.488 "read": true, 00:05:46.488 "write": true, 00:05:46.488 "unmap": true, 00:05:46.488 "flush": true, 00:05:46.488 "reset": true, 00:05:46.488 "nvme_admin": false, 00:05:46.488 "nvme_io": false, 00:05:46.488 "nvme_io_md": false, 00:05:46.488 "write_zeroes": true, 00:05:46.488 "zcopy": true, 00:05:46.488 "get_zone_info": 
false, 00:05:46.488 "zone_management": false, 00:05:46.488 "zone_append": false, 00:05:46.488 "compare": false, 00:05:46.488 "compare_and_write": false, 00:05:46.488 "abort": true, 00:05:46.488 "seek_hole": false, 00:05:46.488 "seek_data": false, 00:05:46.488 "copy": true, 00:05:46.488 "nvme_iov_md": false 00:05:46.488 }, 00:05:46.488 "memory_domains": [ 00:05:46.488 { 00:05:46.488 "dma_device_id": "system", 00:05:46.488 "dma_device_type": 1 00:05:46.488 }, 00:05:46.488 { 00:05:46.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.488 "dma_device_type": 2 00:05:46.488 } 00:05:46.488 ], 00:05:46.488 "driver_specific": {} 00:05:46.488 }, 00:05:46.488 { 00:05:46.488 "name": "Passthru0", 00:05:46.488 "aliases": [ 00:05:46.488 "5deae2cc-d1c6-5477-b607-ef1162fecb67" 00:05:46.488 ], 00:05:46.488 "product_name": "passthru", 00:05:46.488 "block_size": 512, 00:05:46.488 "num_blocks": 16384, 00:05:46.488 "uuid": "5deae2cc-d1c6-5477-b607-ef1162fecb67", 00:05:46.488 "assigned_rate_limits": { 00:05:46.488 "rw_ios_per_sec": 0, 00:05:46.488 "rw_mbytes_per_sec": 0, 00:05:46.488 "r_mbytes_per_sec": 0, 00:05:46.488 "w_mbytes_per_sec": 0 00:05:46.488 }, 00:05:46.488 "claimed": false, 00:05:46.488 "zoned": false, 00:05:46.488 "supported_io_types": { 00:05:46.488 "read": true, 00:05:46.488 "write": true, 00:05:46.488 "unmap": true, 00:05:46.488 "flush": true, 00:05:46.488 "reset": true, 00:05:46.488 "nvme_admin": false, 00:05:46.488 "nvme_io": false, 00:05:46.488 "nvme_io_md": false, 00:05:46.488 "write_zeroes": true, 00:05:46.488 "zcopy": true, 00:05:46.488 "get_zone_info": false, 00:05:46.488 "zone_management": false, 00:05:46.488 "zone_append": false, 00:05:46.488 "compare": false, 00:05:46.488 "compare_and_write": false, 00:05:46.488 "abort": true, 00:05:46.488 "seek_hole": false, 00:05:46.488 "seek_data": false, 00:05:46.488 "copy": true, 00:05:46.488 "nvme_iov_md": false 00:05:46.488 }, 00:05:46.488 "memory_domains": [ 00:05:46.488 { 00:05:46.488 "dma_device_id": "system", 00:05:46.488 "dma_device_type": 1 00:05:46.488 }, 00:05:46.488 { 00:05:46.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.488 "dma_device_type": 2 00:05:46.488 } 00:05:46.488 ], 00:05:46.488 "driver_specific": { 00:05:46.488 "passthru": { 00:05:46.488 "name": "Passthru0", 00:05:46.488 "base_bdev_name": "Malloc2" 00:05:46.488 } 00:05:46.488 } 00:05:46.488 } 00:05:46.488 ]' 00:05:46.488 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.746 03:08:52 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.746 00:05:46.746 real 0m0.235s 00:05:46.746 user 0m0.156s 00:05:46.746 sys 0m0.022s 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.746 03:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.746 ************************************ 00:05:46.746 END TEST rpc_daemon_integrity 00:05:46.746 ************************************ 00:05:46.746 03:08:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.746 03:08:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.746 03:08:52 rpc -- rpc/rpc.sh@84 -- # killprocess 3058706 00:05:46.746 03:08:52 rpc -- common/autotest_common.sh@948 -- # '[' -z 3058706 ']' 00:05:46.746 03:08:52 rpc -- common/autotest_common.sh@952 -- # kill -0 3058706 00:05:46.746 03:08:52 rpc -- common/autotest_common.sh@953 -- # uname 00:05:46.746 03:08:52 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.746 03:08:52 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3058706 00:05:46.746 03:08:52 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.746 03:08:52 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.746 03:08:52 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3058706' 00:05:46.746 killing process with pid 3058706 00:05:46.746 03:08:52 rpc -- common/autotest_common.sh@967 -- # kill 3058706 00:05:46.746 03:08:52 rpc -- common/autotest_common.sh@972 -- # wait 3058706 00:05:47.310 00:05:47.310 real 0m1.888s 00:05:47.310 user 0m2.387s 00:05:47.310 sys 0m0.584s 00:05:47.310 03:08:53 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.310 03:08:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.310 ************************************ 00:05:47.310 END TEST rpc 00:05:47.310 ************************************ 00:05:47.310 03:08:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.310 03:08:53 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:47.310 03:08:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.310 03:08:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.310 03:08:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.310 ************************************ 00:05:47.310 START TEST skip_rpc 00:05:47.310 ************************************ 00:05:47.310 03:08:53 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:47.310 * Looking for test storage... 
00:05:47.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.310 03:08:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:47.310 03:08:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:47.310 03:08:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:47.310 03:08:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.310 03:08:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.310 03:08:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.310 ************************************ 00:05:47.310 START TEST skip_rpc 00:05:47.310 ************************************ 00:05:47.310 03:08:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:47.310 03:08:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3059141 00:05:47.310 03:08:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:47.310 03:08:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.310 03:08:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:47.310 [2024-07-15 03:08:53.331900] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:47.310 [2024-07-15 03:08:53.331975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059141 ] 00:05:47.310 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.310 [2024-07-15 03:08:53.392121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.567 [2024-07-15 03:08:53.482219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3059141 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3059141 ']' 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3059141 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3059141 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3059141' 00:05:52.822 killing process with pid 3059141 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3059141 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3059141 00:05:52.822 00:05:52.822 real 0m5.432s 00:05:52.822 user 0m5.126s 00:05:52.822 sys 0m0.310s 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.822 03:08:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.822 ************************************ 00:05:52.822 END TEST skip_rpc 00:05:52.822 ************************************ 00:05:52.822 03:08:58 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:52.822 03:08:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:52.822 03:08:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.822 03:08:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.822 03:08:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.822 ************************************ 00:05:52.822 START TEST skip_rpc_with_json 00:05:52.822 ************************************ 00:05:52.822 03:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:52.823 03:08:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:52.823 03:08:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3059828 00:05:52.823 03:08:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.823 03:08:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.823 03:08:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3059828 00:05:52.823 03:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3059828 ']' 00:05:52.823 03:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.823 03:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.823 03:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
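Before the JSON-config variant proceeds, note what the plain skip_rpc case above actually proved: with --no-rpc-server the target runs normally but never opens /var/tmp/spdk.sock, so every rpc_cmd must fail, and the NOT wrapper turns that expected failure (es=1) into a pass. A hand-run sketch, with paths assuming an SPDK build tree:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &    # target up, RPC listener deliberately absent
  sleep 5                                          # same fixed settle time the test uses
  ./scripts/rpc.py spdk_get_version && echo BUG    # must fail: nothing listens on spdk.sock
  kill %1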
00:05:52.823 03:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.823 03:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.823 [2024-07-15 03:08:58.812936] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:52.823 [2024-07-15 03:08:58.813022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059828 ] 00:05:52.823 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.823 [2024-07-15 03:08:58.878617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.081 [2024-07-15 03:08:58.969261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.338 [2024-07-15 03:08:59.231430] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:53.338 request: 00:05:53.338 { 00:05:53.338 "trtype": "tcp", 00:05:53.338 "method": "nvmf_get_transports", 00:05:53.338 "req_id": 1 00:05:53.338 } 00:05:53.338 Got JSON-RPC error response 00:05:53.338 response: 00:05:53.338 { 00:05:53.338 "code": -19, 00:05:53.338 "message": "No such device" 00:05:53.338 } 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.338 [2024-07-15 03:08:59.239554] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.338 03:08:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:53.338 { 00:05:53.338 "subsystems": [ 00:05:53.338 { 00:05:53.338 "subsystem": "vfio_user_target", 00:05:53.338 "config": null 00:05:53.338 }, 00:05:53.338 { 00:05:53.338 "subsystem": "keyring", 00:05:53.338 "config": [] 00:05:53.338 }, 00:05:53.338 { 00:05:53.338 "subsystem": "iobuf", 00:05:53.338 "config": [ 00:05:53.339 { 00:05:53.339 "method": "iobuf_set_options", 00:05:53.339 "params": { 00:05:53.339 "small_pool_count": 8192, 00:05:53.339 "large_pool_count": 1024, 00:05:53.339 "small_bufsize": 8192, 00:05:53.339 "large_bufsize": 
135168 00:05:53.339 } 00:05:53.339 } 00:05:53.339 ] 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "sock", 00:05:53.339 "config": [ 00:05:53.339 { 00:05:53.339 "method": "sock_set_default_impl", 00:05:53.339 "params": { 00:05:53.339 "impl_name": "posix" 00:05:53.339 } 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "method": "sock_impl_set_options", 00:05:53.339 "params": { 00:05:53.339 "impl_name": "ssl", 00:05:53.339 "recv_buf_size": 4096, 00:05:53.339 "send_buf_size": 4096, 00:05:53.339 "enable_recv_pipe": true, 00:05:53.339 "enable_quickack": false, 00:05:53.339 "enable_placement_id": 0, 00:05:53.339 "enable_zerocopy_send_server": true, 00:05:53.339 "enable_zerocopy_send_client": false, 00:05:53.339 "zerocopy_threshold": 0, 00:05:53.339 "tls_version": 0, 00:05:53.339 "enable_ktls": false 00:05:53.339 } 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "method": "sock_impl_set_options", 00:05:53.339 "params": { 00:05:53.339 "impl_name": "posix", 00:05:53.339 "recv_buf_size": 2097152, 00:05:53.339 "send_buf_size": 2097152, 00:05:53.339 "enable_recv_pipe": true, 00:05:53.339 "enable_quickack": false, 00:05:53.339 "enable_placement_id": 0, 00:05:53.339 "enable_zerocopy_send_server": true, 00:05:53.339 "enable_zerocopy_send_client": false, 00:05:53.339 "zerocopy_threshold": 0, 00:05:53.339 "tls_version": 0, 00:05:53.339 "enable_ktls": false 00:05:53.339 } 00:05:53.339 } 00:05:53.339 ] 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "vmd", 00:05:53.339 "config": [] 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "accel", 00:05:53.339 "config": [ 00:05:53.339 { 00:05:53.339 "method": "accel_set_options", 00:05:53.339 "params": { 00:05:53.339 "small_cache_size": 128, 00:05:53.339 "large_cache_size": 16, 00:05:53.339 "task_count": 2048, 00:05:53.339 "sequence_count": 2048, 00:05:53.339 "buf_count": 2048 00:05:53.339 } 00:05:53.339 } 00:05:53.339 ] 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "bdev", 00:05:53.339 "config": [ 00:05:53.339 { 00:05:53.339 "method": "bdev_set_options", 00:05:53.339 "params": { 00:05:53.339 "bdev_io_pool_size": 65535, 00:05:53.339 "bdev_io_cache_size": 256, 00:05:53.339 "bdev_auto_examine": true, 00:05:53.339 "iobuf_small_cache_size": 128, 00:05:53.339 "iobuf_large_cache_size": 16 00:05:53.339 } 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "method": "bdev_raid_set_options", 00:05:53.339 "params": { 00:05:53.339 "process_window_size_kb": 1024 00:05:53.339 } 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "method": "bdev_iscsi_set_options", 00:05:53.339 "params": { 00:05:53.339 "timeout_sec": 30 00:05:53.339 } 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "method": "bdev_nvme_set_options", 00:05:53.339 "params": { 00:05:53.339 "action_on_timeout": "none", 00:05:53.339 "timeout_us": 0, 00:05:53.339 "timeout_admin_us": 0, 00:05:53.339 "keep_alive_timeout_ms": 10000, 00:05:53.339 "arbitration_burst": 0, 00:05:53.339 "low_priority_weight": 0, 00:05:53.339 "medium_priority_weight": 0, 00:05:53.339 "high_priority_weight": 0, 00:05:53.339 "nvme_adminq_poll_period_us": 10000, 00:05:53.339 "nvme_ioq_poll_period_us": 0, 00:05:53.339 "io_queue_requests": 0, 00:05:53.339 "delay_cmd_submit": true, 00:05:53.339 "transport_retry_count": 4, 00:05:53.339 "bdev_retry_count": 3, 00:05:53.339 "transport_ack_timeout": 0, 00:05:53.339 "ctrlr_loss_timeout_sec": 0, 00:05:53.339 "reconnect_delay_sec": 0, 00:05:53.339 "fast_io_fail_timeout_sec": 0, 00:05:53.339 "disable_auto_failback": false, 00:05:53.339 "generate_uuids": false, 00:05:53.339 "transport_tos": 0, 
00:05:53.339 "nvme_error_stat": false, 00:05:53.339 "rdma_srq_size": 0, 00:05:53.339 "io_path_stat": false, 00:05:53.339 "allow_accel_sequence": false, 00:05:53.339 "rdma_max_cq_size": 0, 00:05:53.339 "rdma_cm_event_timeout_ms": 0, 00:05:53.339 "dhchap_digests": [ 00:05:53.339 "sha256", 00:05:53.339 "sha384", 00:05:53.339 "sha512" 00:05:53.339 ], 00:05:53.339 "dhchap_dhgroups": [ 00:05:53.339 "null", 00:05:53.339 "ffdhe2048", 00:05:53.339 "ffdhe3072", 00:05:53.339 "ffdhe4096", 00:05:53.339 "ffdhe6144", 00:05:53.339 "ffdhe8192" 00:05:53.339 ] 00:05:53.339 } 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "method": "bdev_nvme_set_hotplug", 00:05:53.339 "params": { 00:05:53.339 "period_us": 100000, 00:05:53.339 "enable": false 00:05:53.339 } 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "method": "bdev_wait_for_examine" 00:05:53.339 } 00:05:53.339 ] 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "scsi", 00:05:53.339 "config": null 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "scheduler", 00:05:53.339 "config": [ 00:05:53.339 { 00:05:53.339 "method": "framework_set_scheduler", 00:05:53.339 "params": { 00:05:53.339 "name": "static" 00:05:53.339 } 00:05:53.339 } 00:05:53.339 ] 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "vhost_scsi", 00:05:53.339 "config": [] 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "vhost_blk", 00:05:53.339 "config": [] 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "ublk", 00:05:53.339 "config": [] 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "nbd", 00:05:53.339 "config": [] 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "nvmf", 00:05:53.339 "config": [ 00:05:53.339 { 00:05:53.339 "method": "nvmf_set_config", 00:05:53.339 "params": { 00:05:53.339 "discovery_filter": "match_any", 00:05:53.339 "admin_cmd_passthru": { 00:05:53.339 "identify_ctrlr": false 00:05:53.339 } 00:05:53.339 } 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "method": "nvmf_set_max_subsystems", 00:05:53.339 "params": { 00:05:53.339 "max_subsystems": 1024 00:05:53.339 } 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "method": "nvmf_set_crdt", 00:05:53.339 "params": { 00:05:53.339 "crdt1": 0, 00:05:53.339 "crdt2": 0, 00:05:53.339 "crdt3": 0 00:05:53.339 } 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "method": "nvmf_create_transport", 00:05:53.339 "params": { 00:05:53.339 "trtype": "TCP", 00:05:53.339 "max_queue_depth": 128, 00:05:53.339 "max_io_qpairs_per_ctrlr": 127, 00:05:53.339 "in_capsule_data_size": 4096, 00:05:53.339 "max_io_size": 131072, 00:05:53.339 "io_unit_size": 131072, 00:05:53.339 "max_aq_depth": 128, 00:05:53.339 "num_shared_buffers": 511, 00:05:53.339 "buf_cache_size": 4294967295, 00:05:53.339 "dif_insert_or_strip": false, 00:05:53.339 "zcopy": false, 00:05:53.339 "c2h_success": true, 00:05:53.339 "sock_priority": 0, 00:05:53.339 "abort_timeout_sec": 1, 00:05:53.339 "ack_timeout": 0, 00:05:53.339 "data_wr_pool_size": 0 00:05:53.339 } 00:05:53.339 } 00:05:53.339 ] 00:05:53.339 }, 00:05:53.339 { 00:05:53.339 "subsystem": "iscsi", 00:05:53.339 "config": [ 00:05:53.339 { 00:05:53.339 "method": "iscsi_set_options", 00:05:53.339 "params": { 00:05:53.339 "node_base": "iqn.2016-06.io.spdk", 00:05:53.339 "max_sessions": 128, 00:05:53.339 "max_connections_per_session": 2, 00:05:53.339 "max_queue_depth": 64, 00:05:53.339 "default_time2wait": 2, 00:05:53.339 "default_time2retain": 20, 00:05:53.339 "first_burst_length": 8192, 00:05:53.339 "immediate_data": true, 00:05:53.339 "allow_duplicated_isid": false, 00:05:53.339 
"error_recovery_level": 0, 00:05:53.339 "nop_timeout": 60, 00:05:53.339 "nop_in_interval": 30, 00:05:53.339 "disable_chap": false, 00:05:53.339 "require_chap": false, 00:05:53.339 "mutual_chap": false, 00:05:53.339 "chap_group": 0, 00:05:53.339 "max_large_datain_per_connection": 64, 00:05:53.339 "max_r2t_per_connection": 4, 00:05:53.339 "pdu_pool_size": 36864, 00:05:53.339 "immediate_data_pool_size": 16384, 00:05:53.339 "data_out_pool_size": 2048 00:05:53.339 } 00:05:53.339 } 00:05:53.339 ] 00:05:53.339 } 00:05:53.339 ] 00:05:53.339 } 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3059828 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3059828 ']' 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3059828 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3059828 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3059828' 00:05:53.339 killing process with pid 3059828 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3059828 00:05:53.339 03:08:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3059828 00:05:53.905 03:08:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3059968 00:05:53.905 03:08:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:53.905 03:08:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:59.237 03:09:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3059968 00:05:59.237 03:09:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3059968 ']' 00:05:59.237 03:09:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3059968 00:05:59.237 03:09:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:59.237 03:09:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.237 03:09:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3059968 00:05:59.237 03:09:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.237 03:09:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.237 03:09:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3059968' 00:05:59.237 killing process with pid 3059968 00:05:59.237 03:09:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3059968 00:05:59.237 03:09:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3059968 
00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:59.237 00:05:59.237 real 0m6.481s 00:05:59.237 user 0m6.060s 00:05:59.237 sys 0m0.695s 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.237 ************************************ 00:05:59.237 END TEST skip_rpc_with_json 00:05:59.237 ************************************ 00:05:59.237 03:09:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:59.237 03:09:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:59.237 03:09:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.237 03:09:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.237 03:09:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.237 ************************************ 00:05:59.237 START TEST skip_rpc_with_delay 00:05:59.237 ************************************ 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.237 [2024-07-15 03:09:05.344272] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:59.237 [2024-07-15 03:09:05.344403] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.237 00:05:59.237 real 0m0.070s 00:05:59.237 user 0m0.042s 00:05:59.237 sys 0m0.026s 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.237 03:09:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:59.237 ************************************ 00:05:59.237 END TEST skip_rpc_with_delay 00:05:59.237 ************************************ 00:05:59.237 03:09:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:59.237 03:09:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:59.494 03:09:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:59.494 03:09:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:59.494 03:09:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.494 03:09:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.494 03:09:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.495 ************************************ 00:05:59.495 START TEST exit_on_failed_rpc_init 00:05:59.495 ************************************ 00:05:59.495 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:59.495 03:09:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3060815 00:05:59.495 03:09:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.495 03:09:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3060815 00:05:59.495 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3060815 ']' 00:05:59.495 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.495 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.495 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.495 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.495 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.495 [2024-07-15 03:09:05.457846] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:59.495 [2024-07-15 03:09:05.457971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060815 ] 00:05:59.495 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.495 [2024-07-15 03:09:05.516356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.495 [2024-07-15 03:09:05.604603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:59.752 03:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.010 [2024-07-15 03:09:05.912634] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:00.010 [2024-07-15 03:09:05.912726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060820 ] 00:06:00.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.010 [2024-07-15 03:09:05.974568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.010 [2024-07-15 03:09:06.069337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.010 [2024-07-15 03:09:06.069470] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:00.010 [2024-07-15 03:09:06.069492] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:00.010 [2024-07-15 03:09:06.069506] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3060815 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3060815 ']' 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3060815 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3060815 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3060815' 00:06:00.267 killing process with pid 3060815 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3060815 00:06:00.267 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3060815 00:06:00.525 00:06:00.525 real 0m1.188s 00:06:00.525 user 0m1.287s 00:06:00.525 sys 0m0.460s 00:06:00.525 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.525 03:09:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.525 ************************************ 00:06:00.525 END TEST exit_on_failed_rpc_init 00:06:00.525 ************************************ 00:06:00.525 03:09:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:00.525 03:09:06 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:00.525 00:06:00.525 real 0m13.412s 00:06:00.525 user 0m12.615s 00:06:00.525 sys 0m1.648s 00:06:00.525 03:09:06 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.525 03:09:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.525 ************************************ 00:06:00.525 END TEST skip_rpc 00:06:00.525 ************************************ 00:06:00.525 03:09:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.525 03:09:06 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:00.525 03:09:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.525 03:09:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.525 03:09:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.783 ************************************ 00:06:00.783 START TEST rpc_client 00:06:00.783 ************************************ 00:06:00.783 03:09:06 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:00.783 * Looking for test storage... 00:06:00.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:00.783 03:09:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:00.783 OK 00:06:00.783 03:09:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:00.783 00:06:00.783 real 0m0.068s 00:06:00.783 user 0m0.029s 00:06:00.783 sys 0m0.043s 00:06:00.783 03:09:06 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.783 03:09:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:00.783 ************************************ 00:06:00.783 END TEST rpc_client 00:06:00.783 ************************************ 00:06:00.783 03:09:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.783 03:09:06 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:00.783 03:09:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.783 03:09:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.783 03:09:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.783 ************************************ 00:06:00.783 START TEST json_config 00:06:00.783 ************************************ 00:06:00.783 03:09:06 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.783 
03:09:06 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:00.783 03:09:06 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.783 03:09:06 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.783 03:09:06 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.783 03:09:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.783 03:09:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.783 03:09:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.783 03:09:06 json_config -- paths/export.sh@5 -- # export PATH 00:06:00.783 03:09:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@47 -- # : 0 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.783 03:09:06 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:00.783 03:09:06 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:00.783 INFO: JSON configuration test init 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:00.783 03:09:06 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:00.783 03:09:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.784 03:09:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.784 03:09:06 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:00.784 03:09:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.784 03:09:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.784 03:09:06 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:00.784 03:09:06 json_config -- json_config/common.sh@9 -- # local app=target 00:06:00.784 03:09:06 json_config -- json_config/common.sh@10 -- # shift 00:06:00.784 03:09:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:00.784 03:09:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:00.784 03:09:06 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:00.784 03:09:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.784 03:09:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.784 03:09:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3061065 00:06:00.784 03:09:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:00.784 03:09:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:00.784 Waiting for target to run... 00:06:00.784 03:09:06 json_config -- json_config/common.sh@25 -- # waitforlisten 3061065 /var/tmp/spdk_tgt.sock 00:06:00.784 03:09:06 json_config -- common/autotest_common.sh@829 -- # '[' -z 3061065 ']' 00:06:00.784 03:09:06 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:00.784 03:09:06 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.784 03:09:06 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:00.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:00.784 03:09:06 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.784 03:09:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.784 [2024-07-15 03:09:06.890846] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:00.784 [2024-07-15 03:09:06.890975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061065 ] 00:06:00.784 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.350 [2024-07-15 03:09:07.234535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.350 [2024-07-15 03:09:07.311855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.916 03:09:07 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.916 03:09:07 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:01.916 03:09:07 json_config -- json_config/common.sh@26 -- # echo '' 00:06:01.916 00:06:01.916 03:09:07 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:01.916 03:09:07 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:01.916 03:09:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.916 03:09:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.916 03:09:07 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:01.916 03:09:07 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:01.916 03:09:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.916 03:09:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.916 03:09:07 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:01.916 03:09:07 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:01.916 03:09:07 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:05.199 03:09:11 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.199 03:09:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:05.199 03:09:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:05.199 03:09:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.199 03:09:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:05.199 03:09:11 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.199 03:09:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:05.199 03:09:11 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.199 03:09:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.457 MallocForNvmf0 00:06:05.457 03:09:11 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.457 03:09:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.715 MallocForNvmf1 00:06:05.715 03:09:11 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:05.715 03:09:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:05.973 [2024-07-15 03:09:12.019086] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.973 03:09:12 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.973 03:09:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:06.231 03:09:12 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:06.231 03:09:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:06.489 03:09:12 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:06.489 03:09:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:06.747 03:09:12 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:06.747 03:09:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:07.005 [2024-07-15 03:09:13.010359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:07.005 03:09:13 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:07.005 03:09:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.005 03:09:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.005 03:09:13 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:07.005 03:09:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.005 03:09:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.005 03:09:13 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:07.005 03:09:13 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:07.005 03:09:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:07.271 MallocBdevForConfigChangeCheck 00:06:07.271 03:09:13 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:07.271 03:09:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.271 03:09:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.271 03:09:13 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:07.271 03:09:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.861 03:09:13 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:07.861 INFO: shutting down applications... 00:06:07.861 03:09:13 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:07.861 03:09:13 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:07.861 03:09:13 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:07.861 03:09:13 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:09.234 Calling clear_iscsi_subsystem 00:06:09.234 Calling clear_nvmf_subsystem 00:06:09.234 Calling clear_nbd_subsystem 00:06:09.234 Calling clear_ublk_subsystem 00:06:09.234 Calling clear_vhost_blk_subsystem 00:06:09.234 Calling clear_vhost_scsi_subsystem 00:06:09.235 Calling clear_bdev_subsystem 00:06:09.235 03:09:15 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:09.235 03:09:15 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:09.235 03:09:15 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:09.235 03:09:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.235 03:09:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:09.235 03:09:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:09.800 03:09:15 json_config -- json_config/json_config.sh@345 -- # break 00:06:09.801 03:09:15 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:09.801 03:09:15 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:09.801 03:09:15 json_config -- json_config/common.sh@31 -- # local app=target 00:06:09.801 03:09:15 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:09.801 03:09:15 json_config -- json_config/common.sh@35 -- # [[ -n 3061065 ]] 00:06:09.801 03:09:15 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3061065 00:06:09.801 03:09:15 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:09.801 03:09:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.801 03:09:15 json_config -- json_config/common.sh@41 -- # kill -0 3061065 00:06:09.801 03:09:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.369 03:09:16 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.369 03:09:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.369 03:09:16 json_config -- json_config/common.sh@41 -- # kill -0 3061065 00:06:10.369 03:09:16 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.369 03:09:16 json_config -- json_config/common.sh@43 -- # break 00:06:10.369 03:09:16 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.369 03:09:16 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:06:10.369 SPDK target shutdown done 00:06:10.369 03:09:16 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:10.369 INFO: relaunching applications... 00:06:10.369 03:09:16 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.369 03:09:16 json_config -- json_config/common.sh@9 -- # local app=target 00:06:10.369 03:09:16 json_config -- json_config/common.sh@10 -- # shift 00:06:10.369 03:09:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.369 03:09:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.369 03:09:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.369 03:09:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.369 03:09:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.369 03:09:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3062812 00:06:10.369 03:09:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.369 03:09:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.369 Waiting for target to run... 00:06:10.369 03:09:16 json_config -- json_config/common.sh@25 -- # waitforlisten 3062812 /var/tmp/spdk_tgt.sock 00:06:10.369 03:09:16 json_config -- common/autotest_common.sh@829 -- # '[' -z 3062812 ']' 00:06:10.369 03:09:16 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.369 03:09:16 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.369 03:09:16 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.369 03:09:16 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.369 03:09:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.369 [2024-07-15 03:09:16.280787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:10.369 [2024-07-15 03:09:16.280894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3062812 ] 00:06:10.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.937 [2024-07-15 03:09:16.827339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.937 [2024-07-15 03:09:16.905006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.218 [2024-07-15 03:09:19.936705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.218 [2024-07-15 03:09:19.969171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:14.783 03:09:20 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.783 03:09:20 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:14.783 03:09:20 json_config -- json_config/common.sh@26 -- # echo '' 00:06:14.783 00:06:14.783 03:09:20 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:14.783 03:09:20 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:14.783 INFO: Checking if target configuration is the same... 00:06:14.783 03:09:20 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.783 03:09:20 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:14.783 03:09:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:14.783 + '[' 2 -ne 2 ']' 00:06:14.783 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:14.783 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:14.783 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:14.783 +++ basename /dev/fd/62 00:06:14.783 ++ mktemp /tmp/62.XXX 00:06:14.783 + tmp_file_1=/tmp/62.1LK 00:06:14.783 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.783 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:14.783 + tmp_file_2=/tmp/spdk_tgt_config.json.wSq 00:06:14.783 + ret=0 00:06:14.783 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.041 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.041 + diff -u /tmp/62.1LK /tmp/spdk_tgt_config.json.wSq 00:06:15.041 + echo 'INFO: JSON config files are the same' 00:06:15.041 INFO: JSON config files are the same 00:06:15.041 + rm /tmp/62.1LK /tmp/spdk_tgt_config.json.wSq 00:06:15.041 + exit 0 00:06:15.041 03:09:21 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:15.041 03:09:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:15.041 INFO: changing configuration and checking if this can be detected... 
00:06:15.041 03:09:21 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.041 03:09:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.299 03:09:21 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.299 03:09:21 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:15.299 03:09:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.299 + '[' 2 -ne 2 ']' 00:06:15.299 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:15.299 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:15.299 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:15.299 +++ basename /dev/fd/62 00:06:15.299 ++ mktemp /tmp/62.XXX 00:06:15.299 + tmp_file_1=/tmp/62.a8j 00:06:15.299 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.299 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.299 + tmp_file_2=/tmp/spdk_tgt_config.json.9DG 00:06:15.299 + ret=0 00:06:15.299 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.881 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.881 + diff -u /tmp/62.a8j /tmp/spdk_tgt_config.json.9DG 00:06:15.881 + ret=1 00:06:15.881 + echo '=== Start of file: /tmp/62.a8j ===' 00:06:15.881 + cat /tmp/62.a8j 00:06:15.881 + echo '=== End of file: /tmp/62.a8j ===' 00:06:15.881 + echo '' 00:06:15.881 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9DG ===' 00:06:15.881 + cat /tmp/spdk_tgt_config.json.9DG 00:06:15.881 + echo '=== End of file: /tmp/spdk_tgt_config.json.9DG ===' 00:06:15.881 + echo '' 00:06:15.881 + rm /tmp/62.a8j /tmp/spdk_tgt_config.json.9DG 00:06:15.881 + exit 1 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:15.881 INFO: configuration change detected. 
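Note: the two json_diff.sh runs above are the whole consistency check this test relies on. The target's live configuration is exported with the save_config RPC, both the live dump and the reference spdk_tgt_config.json are canonicalized by config_filter.py -method sort, and a plain diff decides the outcome: exit 0 while the sorted dumps match ("INFO: JSON config files are the same"), exit 1 once MallocBdevForConfigChangeCheck has been deleted ("INFO: configuration change detected."). A minimal standalone sketch of the same technique follows; the socket path, temp file names, and relative script paths are assumptions taken from the trace above, not part of the test suite itself.

    #!/usr/bin/env bash
    # Sketch: check whether a live SPDK target still matches a saved JSON config.
    # Assumes an spdk_tgt listening on /var/tmp/spdk_tgt.sock and the repo layout
    # seen in the trace (scripts/rpc.py, test/json_config/config_filter.py).
    RPC=./scripts/rpc.py
    FILTER=./test/json_config/config_filter.py
    SOCK=/var/tmp/spdk_tgt.sock

    # Export the live configuration and sort both sides so key order is irrelevant.
    "$RPC" -s "$SOCK" save_config | "$FILTER" -method sort > /tmp/live.json
    "$FILTER" -method sort < spdk_tgt_config.json > /tmp/ref.json

    if diff -u /tmp/ref.json /tmp/live.json > /dev/null; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi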
00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@317 -- # [[ -n 3062812 ]] 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 03:09:21 json_config -- json_config/json_config.sh@323 -- # killprocess 3062812 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@948 -- # '[' -z 3062812 ']' 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@952 -- # kill -0 3062812 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@953 -- # uname 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3062812 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3062812' 00:06:15.881 killing process with pid 3062812 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@967 -- # kill 3062812 00:06:15.881 03:09:21 json_config -- common/autotest_common.sh@972 -- # wait 3062812 00:06:17.820 03:09:23 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.820 03:09:23 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:17.820 03:09:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.820 03:09:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.820 03:09:23 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:17.820 03:09:23 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:17.820 INFO: Success 00:06:17.820 00:06:17.820 real 0m16.725s 
00:06:17.820 user 0m18.682s 00:06:17.820 sys 0m2.029s 00:06:17.820 03:09:23 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.820 03:09:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.820 ************************************ 00:06:17.820 END TEST json_config 00:06:17.820 ************************************ 00:06:17.820 03:09:23 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.820 03:09:23 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:17.820 03:09:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.820 03:09:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.820 03:09:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.820 ************************************ 00:06:17.820 START TEST json_config_extra_key 00:06:17.820 ************************************ 00:06:17.820 03:09:23 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.820 03:09:23 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.820 03:09:23 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.820 03:09:23 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.820 03:09:23 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.820 03:09:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.820 03:09:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.820 03:09:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:17.820 03:09:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:17.820 03:09:23 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:17.820 03:09:23 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:17.820 INFO: launching applications... 00:06:17.820 03:09:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:17.820 03:09:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:17.820 03:09:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:17.820 03:09:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:17.820 03:09:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:17.820 03:09:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:17.820 03:09:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.820 03:09:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.820 03:09:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3063804 00:06:17.820 03:09:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:17.820 03:09:23 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:17.820 Waiting for target to run... 00:06:17.820 03:09:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3063804 /var/tmp/spdk_tgt.sock 00:06:17.820 03:09:23 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3063804 ']' 00:06:17.820 03:09:23 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:17.821 03:09:23 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.821 03:09:23 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:17.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:17.821 03:09:23 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.821 03:09:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:17.821 [2024-07-15 03:09:23.666536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:17.821 [2024-07-15 03:09:23.666633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3063804 ] 00:06:17.821 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.080 [2024-07-15 03:09:24.014303] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.080 [2024-07-15 03:09:24.078605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.646 03:09:24 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.646 03:09:24 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:18.646 03:09:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:18.646 00:06:18.646 03:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:18.646 INFO: shutting down applications... 00:06:18.646 03:09:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:18.646 03:09:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:18.646 03:09:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:18.646 03:09:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3063804 ]] 00:06:18.646 03:09:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3063804 00:06:18.646 03:09:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:18.646 03:09:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.646 03:09:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3063804 00:06:18.646 03:09:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:19.211 03:09:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:19.211 03:09:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.211 03:09:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3063804 00:06:19.211 03:09:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:19.211 03:09:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:19.211 03:09:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:19.211 03:09:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:19.211 SPDK target shutdown done 00:06:19.211 03:09:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:19.211 Success 00:06:19.211 00:06:19.211 real 0m1.570s 00:06:19.211 user 0m1.512s 00:06:19.211 sys 0m0.448s 00:06:19.211 03:09:25 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.211 03:09:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:19.211 ************************************ 00:06:19.211 END TEST json_config_extra_key 00:06:19.211 ************************************ 00:06:19.211 03:09:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.211 03:09:25 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.211 03:09:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.211 03:09:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.211 03:09:25 -- 
common/autotest_common.sh@10 -- # set +x 00:06:19.211 ************************************ 00:06:19.211 START TEST alias_rpc 00:06:19.211 ************************************ 00:06:19.211 03:09:25 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.211 * Looking for test storage... 00:06:19.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:19.211 03:09:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:19.211 03:09:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3064007 00:06:19.211 03:09:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.211 03:09:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3064007 00:06:19.211 03:09:25 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3064007 ']' 00:06:19.211 03:09:25 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.211 03:09:25 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.211 03:09:25 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.211 03:09:25 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.211 03:09:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.211 [2024-07-15 03:09:25.280014] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:19.211 [2024-07-15 03:09:25.280095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3064007 ] 00:06:19.211 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.211 [2024-07-15 03:09:25.340984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.469 [2024-07-15 03:09:25.426454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.728 03:09:25 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.728 03:09:25 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:19.728 03:09:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:19.985 03:09:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3064007 00:06:19.985 03:09:25 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3064007 ']' 00:06:19.985 03:09:25 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3064007 00:06:19.985 03:09:25 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:19.985 03:09:25 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.985 03:09:25 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3064007 00:06:19.985 03:09:25 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.985 03:09:25 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.985 03:09:25 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3064007' 00:06:19.985 killing process with pid 3064007 00:06:19.985 03:09:25 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 3064007 00:06:19.985 03:09:25 alias_rpc -- common/autotest_common.sh@972 -- # wait 3064007 00:06:20.243 00:06:20.243 real 0m1.201s 00:06:20.243 user 0m1.270s 00:06:20.243 sys 0m0.421s 00:06:20.243 03:09:26 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.243 03:09:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.243 ************************************ 00:06:20.243 END TEST alias_rpc 00:06:20.243 ************************************ 00:06:20.502 03:09:26 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.502 03:09:26 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:20.502 03:09:26 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:20.502 03:09:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.502 03:09:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.502 03:09:26 -- common/autotest_common.sh@10 -- # set +x 00:06:20.502 ************************************ 00:06:20.502 START TEST spdkcli_tcp 00:06:20.502 ************************************ 00:06:20.502 03:09:26 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:20.502 * Looking for test storage... 00:06:20.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:20.502 03:09:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:20.502 03:09:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:20.502 03:09:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:20.502 03:09:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:20.502 03:09:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:20.502 03:09:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:20.502 03:09:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:20.502 03:09:26 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.502 03:09:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.502 03:09:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3064304 00:06:20.502 03:09:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:20.502 03:09:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3064304 00:06:20.502 03:09:26 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3064304 ']' 00:06:20.502 03:09:26 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.502 03:09:26 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.502 03:09:26 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.502 03:09:26 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.502 03:09:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.502 [2024-07-15 03:09:26.526218] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:20.502 [2024-07-15 03:09:26.526298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3064304 ] 00:06:20.502 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.502 [2024-07-15 03:09:26.583055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.761 [2024-07-15 03:09:26.668600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.761 [2024-07-15 03:09:26.668603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.019 03:09:26 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.019 03:09:26 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:21.019 03:09:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3064308 00:06:21.019 03:09:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:21.019 03:09:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:21.278 [ 00:06:21.278 "bdev_malloc_delete", 00:06:21.278 "bdev_malloc_create", 00:06:21.278 "bdev_null_resize", 00:06:21.278 "bdev_null_delete", 00:06:21.278 "bdev_null_create", 00:06:21.278 "bdev_nvme_cuse_unregister", 00:06:21.278 "bdev_nvme_cuse_register", 00:06:21.278 "bdev_opal_new_user", 00:06:21.278 "bdev_opal_set_lock_state", 00:06:21.278 "bdev_opal_delete", 00:06:21.278 "bdev_opal_get_info", 00:06:21.278 "bdev_opal_create", 00:06:21.278 "bdev_nvme_opal_revert", 00:06:21.278 "bdev_nvme_opal_init", 00:06:21.278 "bdev_nvme_send_cmd", 00:06:21.278 "bdev_nvme_get_path_iostat", 00:06:21.278 "bdev_nvme_get_mdns_discovery_info", 00:06:21.278 "bdev_nvme_stop_mdns_discovery", 00:06:21.278 "bdev_nvme_start_mdns_discovery", 00:06:21.278 "bdev_nvme_set_multipath_policy", 00:06:21.278 "bdev_nvme_set_preferred_path", 00:06:21.278 "bdev_nvme_get_io_paths", 00:06:21.278 "bdev_nvme_remove_error_injection", 00:06:21.278 "bdev_nvme_add_error_injection", 00:06:21.278 "bdev_nvme_get_discovery_info", 00:06:21.278 "bdev_nvme_stop_discovery", 00:06:21.278 "bdev_nvme_start_discovery", 00:06:21.278 "bdev_nvme_get_controller_health_info", 00:06:21.278 "bdev_nvme_disable_controller", 00:06:21.278 "bdev_nvme_enable_controller", 00:06:21.278 "bdev_nvme_reset_controller", 00:06:21.278 "bdev_nvme_get_transport_statistics", 00:06:21.278 "bdev_nvme_apply_firmware", 00:06:21.278 "bdev_nvme_detach_controller", 00:06:21.278 "bdev_nvme_get_controllers", 00:06:21.278 "bdev_nvme_attach_controller", 00:06:21.278 "bdev_nvme_set_hotplug", 00:06:21.278 "bdev_nvme_set_options", 00:06:21.278 "bdev_passthru_delete", 00:06:21.278 "bdev_passthru_create", 00:06:21.278 "bdev_lvol_set_parent_bdev", 00:06:21.278 "bdev_lvol_set_parent", 00:06:21.278 "bdev_lvol_check_shallow_copy", 00:06:21.278 "bdev_lvol_start_shallow_copy", 00:06:21.278 "bdev_lvol_grow_lvstore", 00:06:21.278 "bdev_lvol_get_lvols", 00:06:21.278 "bdev_lvol_get_lvstores", 00:06:21.278 "bdev_lvol_delete", 00:06:21.278 "bdev_lvol_set_read_only", 00:06:21.278 "bdev_lvol_resize", 00:06:21.278 "bdev_lvol_decouple_parent", 00:06:21.278 "bdev_lvol_inflate", 00:06:21.278 "bdev_lvol_rename", 00:06:21.278 "bdev_lvol_clone_bdev", 00:06:21.278 "bdev_lvol_clone", 00:06:21.278 "bdev_lvol_snapshot", 00:06:21.278 "bdev_lvol_create", 00:06:21.278 "bdev_lvol_delete_lvstore", 00:06:21.278 
"bdev_lvol_rename_lvstore", 00:06:21.278 "bdev_lvol_create_lvstore", 00:06:21.278 "bdev_raid_set_options", 00:06:21.278 "bdev_raid_remove_base_bdev", 00:06:21.278 "bdev_raid_add_base_bdev", 00:06:21.278 "bdev_raid_delete", 00:06:21.278 "bdev_raid_create", 00:06:21.278 "bdev_raid_get_bdevs", 00:06:21.278 "bdev_error_inject_error", 00:06:21.278 "bdev_error_delete", 00:06:21.278 "bdev_error_create", 00:06:21.278 "bdev_split_delete", 00:06:21.278 "bdev_split_create", 00:06:21.278 "bdev_delay_delete", 00:06:21.278 "bdev_delay_create", 00:06:21.278 "bdev_delay_update_latency", 00:06:21.278 "bdev_zone_block_delete", 00:06:21.278 "bdev_zone_block_create", 00:06:21.278 "blobfs_create", 00:06:21.278 "blobfs_detect", 00:06:21.278 "blobfs_set_cache_size", 00:06:21.278 "bdev_aio_delete", 00:06:21.278 "bdev_aio_rescan", 00:06:21.278 "bdev_aio_create", 00:06:21.278 "bdev_ftl_set_property", 00:06:21.278 "bdev_ftl_get_properties", 00:06:21.278 "bdev_ftl_get_stats", 00:06:21.278 "bdev_ftl_unmap", 00:06:21.278 "bdev_ftl_unload", 00:06:21.278 "bdev_ftl_delete", 00:06:21.278 "bdev_ftl_load", 00:06:21.278 "bdev_ftl_create", 00:06:21.278 "bdev_virtio_attach_controller", 00:06:21.278 "bdev_virtio_scsi_get_devices", 00:06:21.278 "bdev_virtio_detach_controller", 00:06:21.278 "bdev_virtio_blk_set_hotplug", 00:06:21.278 "bdev_iscsi_delete", 00:06:21.278 "bdev_iscsi_create", 00:06:21.278 "bdev_iscsi_set_options", 00:06:21.278 "accel_error_inject_error", 00:06:21.278 "ioat_scan_accel_module", 00:06:21.278 "dsa_scan_accel_module", 00:06:21.278 "iaa_scan_accel_module", 00:06:21.278 "vfu_virtio_create_scsi_endpoint", 00:06:21.278 "vfu_virtio_scsi_remove_target", 00:06:21.278 "vfu_virtio_scsi_add_target", 00:06:21.278 "vfu_virtio_create_blk_endpoint", 00:06:21.278 "vfu_virtio_delete_endpoint", 00:06:21.278 "keyring_file_remove_key", 00:06:21.278 "keyring_file_add_key", 00:06:21.278 "keyring_linux_set_options", 00:06:21.278 "iscsi_get_histogram", 00:06:21.278 "iscsi_enable_histogram", 00:06:21.278 "iscsi_set_options", 00:06:21.278 "iscsi_get_auth_groups", 00:06:21.278 "iscsi_auth_group_remove_secret", 00:06:21.278 "iscsi_auth_group_add_secret", 00:06:21.278 "iscsi_delete_auth_group", 00:06:21.278 "iscsi_create_auth_group", 00:06:21.278 "iscsi_set_discovery_auth", 00:06:21.278 "iscsi_get_options", 00:06:21.278 "iscsi_target_node_request_logout", 00:06:21.278 "iscsi_target_node_set_redirect", 00:06:21.278 "iscsi_target_node_set_auth", 00:06:21.278 "iscsi_target_node_add_lun", 00:06:21.278 "iscsi_get_stats", 00:06:21.278 "iscsi_get_connections", 00:06:21.278 "iscsi_portal_group_set_auth", 00:06:21.278 "iscsi_start_portal_group", 00:06:21.278 "iscsi_delete_portal_group", 00:06:21.278 "iscsi_create_portal_group", 00:06:21.278 "iscsi_get_portal_groups", 00:06:21.278 "iscsi_delete_target_node", 00:06:21.278 "iscsi_target_node_remove_pg_ig_maps", 00:06:21.278 "iscsi_target_node_add_pg_ig_maps", 00:06:21.278 "iscsi_create_target_node", 00:06:21.278 "iscsi_get_target_nodes", 00:06:21.278 "iscsi_delete_initiator_group", 00:06:21.278 "iscsi_initiator_group_remove_initiators", 00:06:21.278 "iscsi_initiator_group_add_initiators", 00:06:21.278 "iscsi_create_initiator_group", 00:06:21.278 "iscsi_get_initiator_groups", 00:06:21.278 "nvmf_set_crdt", 00:06:21.278 "nvmf_set_config", 00:06:21.278 "nvmf_set_max_subsystems", 00:06:21.278 "nvmf_stop_mdns_prr", 00:06:21.278 "nvmf_publish_mdns_prr", 00:06:21.278 "nvmf_subsystem_get_listeners", 00:06:21.278 "nvmf_subsystem_get_qpairs", 00:06:21.278 "nvmf_subsystem_get_controllers", 00:06:21.278 
"nvmf_get_stats", 00:06:21.278 "nvmf_get_transports", 00:06:21.278 "nvmf_create_transport", 00:06:21.278 "nvmf_get_targets", 00:06:21.278 "nvmf_delete_target", 00:06:21.278 "nvmf_create_target", 00:06:21.278 "nvmf_subsystem_allow_any_host", 00:06:21.278 "nvmf_subsystem_remove_host", 00:06:21.278 "nvmf_subsystem_add_host", 00:06:21.278 "nvmf_ns_remove_host", 00:06:21.278 "nvmf_ns_add_host", 00:06:21.278 "nvmf_subsystem_remove_ns", 00:06:21.278 "nvmf_subsystem_add_ns", 00:06:21.278 "nvmf_subsystem_listener_set_ana_state", 00:06:21.278 "nvmf_discovery_get_referrals", 00:06:21.278 "nvmf_discovery_remove_referral", 00:06:21.278 "nvmf_discovery_add_referral", 00:06:21.278 "nvmf_subsystem_remove_listener", 00:06:21.278 "nvmf_subsystem_add_listener", 00:06:21.278 "nvmf_delete_subsystem", 00:06:21.278 "nvmf_create_subsystem", 00:06:21.278 "nvmf_get_subsystems", 00:06:21.278 "env_dpdk_get_mem_stats", 00:06:21.278 "nbd_get_disks", 00:06:21.278 "nbd_stop_disk", 00:06:21.278 "nbd_start_disk", 00:06:21.278 "ublk_recover_disk", 00:06:21.278 "ublk_get_disks", 00:06:21.278 "ublk_stop_disk", 00:06:21.278 "ublk_start_disk", 00:06:21.278 "ublk_destroy_target", 00:06:21.278 "ublk_create_target", 00:06:21.278 "virtio_blk_create_transport", 00:06:21.278 "virtio_blk_get_transports", 00:06:21.278 "vhost_controller_set_coalescing", 00:06:21.278 "vhost_get_controllers", 00:06:21.278 "vhost_delete_controller", 00:06:21.278 "vhost_create_blk_controller", 00:06:21.278 "vhost_scsi_controller_remove_target", 00:06:21.278 "vhost_scsi_controller_add_target", 00:06:21.278 "vhost_start_scsi_controller", 00:06:21.278 "vhost_create_scsi_controller", 00:06:21.278 "thread_set_cpumask", 00:06:21.278 "framework_get_governor", 00:06:21.278 "framework_get_scheduler", 00:06:21.278 "framework_set_scheduler", 00:06:21.278 "framework_get_reactors", 00:06:21.278 "thread_get_io_channels", 00:06:21.278 "thread_get_pollers", 00:06:21.278 "thread_get_stats", 00:06:21.278 "framework_monitor_context_switch", 00:06:21.278 "spdk_kill_instance", 00:06:21.278 "log_enable_timestamps", 00:06:21.278 "log_get_flags", 00:06:21.278 "log_clear_flag", 00:06:21.278 "log_set_flag", 00:06:21.278 "log_get_level", 00:06:21.278 "log_set_level", 00:06:21.278 "log_get_print_level", 00:06:21.278 "log_set_print_level", 00:06:21.279 "framework_enable_cpumask_locks", 00:06:21.279 "framework_disable_cpumask_locks", 00:06:21.279 "framework_wait_init", 00:06:21.279 "framework_start_init", 00:06:21.279 "scsi_get_devices", 00:06:21.279 "bdev_get_histogram", 00:06:21.279 "bdev_enable_histogram", 00:06:21.279 "bdev_set_qos_limit", 00:06:21.279 "bdev_set_qd_sampling_period", 00:06:21.279 "bdev_get_bdevs", 00:06:21.279 "bdev_reset_iostat", 00:06:21.279 "bdev_get_iostat", 00:06:21.279 "bdev_examine", 00:06:21.279 "bdev_wait_for_examine", 00:06:21.279 "bdev_set_options", 00:06:21.279 "notify_get_notifications", 00:06:21.279 "notify_get_types", 00:06:21.279 "accel_get_stats", 00:06:21.279 "accel_set_options", 00:06:21.279 "accel_set_driver", 00:06:21.279 "accel_crypto_key_destroy", 00:06:21.279 "accel_crypto_keys_get", 00:06:21.279 "accel_crypto_key_create", 00:06:21.279 "accel_assign_opc", 00:06:21.279 "accel_get_module_info", 00:06:21.279 "accel_get_opc_assignments", 00:06:21.279 "vmd_rescan", 00:06:21.279 "vmd_remove_device", 00:06:21.279 "vmd_enable", 00:06:21.279 "sock_get_default_impl", 00:06:21.279 "sock_set_default_impl", 00:06:21.279 "sock_impl_set_options", 00:06:21.279 "sock_impl_get_options", 00:06:21.279 "iobuf_get_stats", 00:06:21.279 "iobuf_set_options", 
00:06:21.279 "keyring_get_keys", 00:06:21.279 "framework_get_pci_devices", 00:06:21.279 "framework_get_config", 00:06:21.279 "framework_get_subsystems", 00:06:21.279 "vfu_tgt_set_base_path", 00:06:21.279 "trace_get_info", 00:06:21.279 "trace_get_tpoint_group_mask", 00:06:21.279 "trace_disable_tpoint_group", 00:06:21.279 "trace_enable_tpoint_group", 00:06:21.279 "trace_clear_tpoint_mask", 00:06:21.279 "trace_set_tpoint_mask", 00:06:21.279 "spdk_get_version", 00:06:21.279 "rpc_get_methods" 00:06:21.279 ] 00:06:21.279 03:09:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.279 03:09:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:21.279 03:09:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3064304 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3064304 ']' 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3064304 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3064304 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3064304' 00:06:21.279 killing process with pid 3064304 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3064304 00:06:21.279 03:09:27 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3064304 00:06:21.538 00:06:21.538 real 0m1.199s 00:06:21.538 user 0m2.136s 00:06:21.538 sys 0m0.437s 00:06:21.538 03:09:27 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.538 03:09:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.538 ************************************ 00:06:21.538 END TEST spdkcli_tcp 00:06:21.538 ************************************ 00:06:21.538 03:09:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.538 03:09:27 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:21.538 03:09:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.538 03:09:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.538 03:09:27 -- common/autotest_common.sh@10 -- # set +x 00:06:21.538 ************************************ 00:06:21.538 START TEST dpdk_mem_utility 00:06:21.538 ************************************ 00:06:21.538 03:09:27 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:21.796 * Looking for test storage... 
00:06:21.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:21.796 03:09:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:21.796 03:09:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3064506 00:06:21.796 03:09:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.796 03:09:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3064506 00:06:21.796 03:09:27 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3064506 ']' 00:06:21.796 03:09:27 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.796 03:09:27 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.796 03:09:27 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.796 03:09:27 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.796 03:09:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:21.796 [2024-07-15 03:09:27.770361] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:21.796 [2024-07-15 03:09:27.770454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3064506 ] 00:06:21.796 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.796 [2024-07-15 03:09:27.828307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.796 [2024-07-15 03:09:27.912241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.054 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.054 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:22.054 03:09:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:22.054 03:09:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:22.054 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.054 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.054 { 00:06:22.054 "filename": "/tmp/spdk_mem_dump.txt" 00:06:22.054 } 00:06:22.054 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.054 03:09:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:22.313 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:22.313 1 heaps totaling size 814.000000 MiB 00:06:22.313 size: 814.000000 MiB heap id: 0 00:06:22.313 end heaps---------- 00:06:22.313 8 mempools totaling size 598.116089 MiB 00:06:22.313 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:22.313 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:22.313 size: 84.521057 MiB name: bdev_io_3064506 00:06:22.313 size: 51.011292 MiB name: evtpool_3064506 00:06:22.313 
size: 50.003479 MiB name: msgpool_3064506 00:06:22.313 size: 21.763794 MiB name: PDU_Pool 00:06:22.313 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:22.313 size: 0.026123 MiB name: Session_Pool 00:06:22.313 end mempools------- 00:06:22.313 6 memzones totaling size 4.142822 MiB 00:06:22.313 size: 1.000366 MiB name: RG_ring_0_3064506 00:06:22.313 size: 1.000366 MiB name: RG_ring_1_3064506 00:06:22.313 size: 1.000366 MiB name: RG_ring_4_3064506 00:06:22.313 size: 1.000366 MiB name: RG_ring_5_3064506 00:06:22.313 size: 0.125366 MiB name: RG_ring_2_3064506 00:06:22.313 size: 0.015991 MiB name: RG_ring_3_3064506 00:06:22.313 end memzones------- 00:06:22.313 03:09:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:22.313 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:22.313 list of free elements. size: 12.519348 MiB 00:06:22.313 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:22.313 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:22.313 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:22.313 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:22.313 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:22.313 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:22.313 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:22.313 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:22.313 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:22.313 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:22.313 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:22.313 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:22.313 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:22.313 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:22.313 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:22.313 list of standard malloc elements. 
size: 199.218079 MiB 00:06:22.313 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:22.313 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:22.313 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:22.313 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:22.313 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:22.313 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:22.313 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:22.313 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:22.313 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:22.313 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:22.313 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:22.313 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:22.313 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:22.313 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:22.313 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:22.313 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:22.313 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:22.313 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:22.313 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:22.313 list of memzone associated elements. 
size: 602.262573 MiB 00:06:22.313 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:22.313 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:22.313 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:22.313 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:22.313 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:22.313 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3064506_0 00:06:22.313 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:22.313 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3064506_0 00:06:22.313 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:22.313 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3064506_0 00:06:22.313 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:22.313 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:22.313 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:22.313 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:22.313 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:22.313 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3064506 00:06:22.313 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:22.313 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3064506 00:06:22.313 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:22.313 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3064506 00:06:22.313 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:22.313 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:22.313 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:22.313 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:22.313 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:22.313 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:22.313 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:22.313 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:22.313 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:22.313 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3064506 00:06:22.313 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:22.313 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3064506 00:06:22.313 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:22.313 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3064506 00:06:22.313 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:22.313 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3064506 00:06:22.313 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:22.313 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3064506 00:06:22.313 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:22.313 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:22.313 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:22.313 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:22.313 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:22.313 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:22.313 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:22.313 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3064506 00:06:22.313 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:22.313 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:22.313 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:22.313 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:22.313 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:22.313 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3064506 00:06:22.313 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:22.313 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:22.313 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:22.313 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3064506 00:06:22.313 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:22.313 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3064506 00:06:22.313 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:22.313 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:22.314 03:09:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:22.314 03:09:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3064506 00:06:22.314 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3064506 ']' 00:06:22.314 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3064506 00:06:22.314 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:22.314 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.314 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3064506 00:06:22.314 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.314 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.314 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3064506' 00:06:22.314 killing process with pid 3064506 00:06:22.314 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3064506 00:06:22.314 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3064506 00:06:22.572 00:06:22.572 real 0m1.046s 00:06:22.572 user 0m0.999s 00:06:22.572 sys 0m0.413s 00:06:22.572 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.572 03:09:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.572 ************************************ 00:06:22.572 END TEST dpdk_mem_utility 00:06:22.572 ************************************ 00:06:22.830 03:09:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:22.830 03:09:28 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:22.830 03:09:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.830 03:09:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.830 03:09:28 -- common/autotest_common.sh@10 -- # set +x 00:06:22.830 ************************************ 00:06:22.830 START TEST event 00:06:22.830 ************************************ 00:06:22.830 03:09:28 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:22.830 * Looking for test storage... 
00:06:22.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:22.830 03:09:28 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:22.830 03:09:28 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:22.830 03:09:28 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:22.830 03:09:28 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:22.830 03:09:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.830 03:09:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.830 ************************************ 00:06:22.830 START TEST event_perf 00:06:22.830 ************************************ 00:06:22.830 03:09:28 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:22.830 Running I/O for 1 seconds...[2024-07-15 03:09:28.857465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:22.830 [2024-07-15 03:09:28.857533] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3064694 ] 00:06:22.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.830 [2024-07-15 03:09:28.917834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.088 [2024-07-15 03:09:29.009565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.088 [2024-07-15 03:09:29.009630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.088 [2024-07-15 03:09:29.009695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.088 [2024-07-15 03:09:29.009698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.022 Running I/O for 1 seconds... 00:06:24.022 lcore 0: 233537 00:06:24.022 lcore 1: 233537 00:06:24.022 lcore 2: 233537 00:06:24.022 lcore 3: 233537 00:06:24.022 done. 00:06:24.022 00:06:24.022 real 0m1.251s 00:06:24.022 user 0m4.159s 00:06:24.022 sys 0m0.087s 00:06:24.022 03:09:30 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.022 03:09:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.022 ************************************ 00:06:24.022 END TEST event_perf 00:06:24.022 ************************************ 00:06:24.022 03:09:30 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.022 03:09:30 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:24.022 03:09:30 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:24.022 03:09:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.022 03:09:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.022 ************************************ 00:06:24.022 START TEST event_reactor 00:06:24.022 ************************************ 00:06:24.022 03:09:30 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:24.022 [2024-07-15 03:09:30.160144] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:24.022 [2024-07-15 03:09:30.160218] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3064851 ] 00:06:24.283 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.283 [2024-07-15 03:09:30.225614] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.283 [2024-07-15 03:09:30.316606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.656 test_start 00:06:25.656 oneshot 00:06:25.656 tick 100 00:06:25.656 tick 100 00:06:25.656 tick 250 00:06:25.656 tick 100 00:06:25.656 tick 100 00:06:25.656 tick 100 00:06:25.656 tick 250 00:06:25.656 tick 500 00:06:25.656 tick 100 00:06:25.656 tick 100 00:06:25.656 tick 250 00:06:25.656 tick 100 00:06:25.656 tick 100 00:06:25.656 test_end 00:06:25.656 00:06:25.656 real 0m1.253s 00:06:25.656 user 0m1.160s 00:06:25.656 sys 0m0.088s 00:06:25.656 03:09:31 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.656 03:09:31 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:25.656 ************************************ 00:06:25.656 END TEST event_reactor 00:06:25.656 ************************************ 00:06:25.656 03:09:31 event -- common/autotest_common.sh@1142 -- # return 0 00:06:25.656 03:09:31 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.656 03:09:31 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:25.656 03:09:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.656 03:09:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.656 ************************************ 00:06:25.656 START TEST event_reactor_perf 00:06:25.656 ************************************ 00:06:25.656 03:09:31 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.656 [2024-07-15 03:09:31.461119] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:25.656 [2024-07-15 03:09:31.461185] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065009 ] 00:06:25.656 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.656 [2024-07-15 03:09:31.525126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.656 [2024-07-15 03:09:31.614980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.590 test_start 00:06:26.590 test_end 00:06:26.590 Performance: 352816 events per second 00:06:26.590 00:06:26.590 real 0m1.251s 00:06:26.590 user 0m1.159s 00:06:26.590 sys 0m0.088s 00:06:26.590 03:09:32 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.590 03:09:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.590 ************************************ 00:06:26.590 END TEST event_reactor_perf 00:06:26.590 ************************************ 00:06:26.590 03:09:32 event -- common/autotest_common.sh@1142 -- # return 0 00:06:26.590 03:09:32 event -- event/event.sh@49 -- # uname -s 00:06:26.590 03:09:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:26.590 03:09:32 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:26.590 03:09:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.590 03:09:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.590 03:09:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.849 ************************************ 00:06:26.849 START TEST event_scheduler 00:06:26.849 ************************************ 00:06:26.849 03:09:32 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:26.849 * Looking for test storage... 00:06:26.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:26.849 03:09:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:26.849 03:09:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3065193 00:06:26.849 03:09:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:26.849 03:09:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.849 03:09:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3065193 00:06:26.849 03:09:32 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3065193 ']' 00:06:26.849 03:09:32 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.849 03:09:32 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.849 03:09:32 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:26.849 03:09:32 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.849 03:09:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:26.849 [2024-07-15 03:09:32.839769] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:26.849 [2024-07-15 03:09:32.839841] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065193 ] 00:06:26.849 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.849 [2024-07-15 03:09:32.901955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.849 [2024-07-15 03:09:32.989685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.849 [2024-07-15 03:09:32.989771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.849 [2024-07-15 03:09:32.989821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.849 [2024-07-15 03:09:32.989824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.108 03:09:33 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.108 03:09:33 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:27.108 03:09:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:27.108 03:09:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.108 03:09:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.108 [2024-07-15 03:09:33.058676] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:27.108 [2024-07-15 03:09:33.058702] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:27.108 [2024-07-15 03:09:33.058718] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:27.108 [2024-07-15 03:09:33.058728] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:27.108 [2024-07-15 03:09:33.058737] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:27.108 03:09:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.108 03:09:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:27.108 03:09:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.108 03:09:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.108 [2024-07-15 03:09:33.156143] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
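The block above also shows why the scheduler app was launched with --wait-for-rpc: framework_set_scheduler is issued before framework_start_init, and because the 0xF core mask covers only part of an SMT sibling set the dpdk governor fails to initialize, so the dynamic scheduler falls back to its built-in defaults (load limit 20, core limit 80, core busy 95). A minimal sketch of issuing the same pair of RPCs by hand, assuming a target started from the SPDK repository root and listening on the default /var/tmp/spdk.sock; both method names appear in the rpc_get_methods listing earlier in this log:

# switch the running target to the dynamic scheduler, then read the
# active scheduler selection back to confirm it took effect
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_get_scheduler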
00:06:27.108 03:09:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.108 03:09:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:27.108 03:09:33 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.108 03:09:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.108 03:09:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.108 ************************************ 00:06:27.108 START TEST scheduler_create_thread 00:06:27.108 ************************************ 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.108 2 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.108 3 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.108 4 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.108 5 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.108 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.109 6 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.109 7 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.109 8 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.109 9 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.109 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.365 10 00:06:27.365 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.365 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:27.365 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.365 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.365 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.365 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.366 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.928 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.929 00:06:27.929 real 0m0.591s 00:06:27.929 user 0m0.009s 00:06:27.929 sys 0m0.005s 00:06:27.929 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.929 03:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.929 ************************************ 00:06:27.929 END TEST scheduler_create_thread 00:06:27.929 ************************************ 00:06:27.929 03:09:33 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:27.929 03:09:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:27.929 03:09:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3065193 00:06:27.929 03:09:33 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3065193 ']' 00:06:27.929 03:09:33 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3065193 00:06:27.929 03:09:33 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:27.929 03:09:33 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.929 03:09:33 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3065193 00:06:27.929 03:09:33 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:27.929 03:09:33 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:27.929 03:09:33 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3065193' 00:06:27.929 killing process with pid 3065193 00:06:27.929 03:09:33 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3065193 00:06:27.929 03:09:33 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3065193 00:06:28.186 [2024-07-15 03:09:34.252238] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
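The scheduler_create_thread trace above drives the test app's scheduler plugin purely over RPC: four active threads pinned to cores 0-3 (-a 100), four pinned idle threads (-a 0), an unpinned one_third_active thread (-a 30), a half_active thread whose activity is then raised to 50, and finally a short-lived thread created only to be deleted. A condensed sketch of that lifecycle issued by hand, assuming the test/event/scheduler app is running and rpc.py can import scheduler_plugin; the numeric thread ids below (11 and 12) are the ones this particular run returned, so in practice they should be taken from each create call's output:

# create a pinned thread on core 0 with activity 100, as the trace does
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# retune an existing thread's activity to 50 (thread id 11 in this run)
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
# remove a thread by id (the trace deletes the throwaway thread, id 12)
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12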
00:06:28.444 00:06:28.444 real 0m1.709s 00:06:28.444 user 0m2.195s 00:06:28.444 sys 0m0.340s 00:06:28.444 03:09:34 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.444 03:09:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.444 ************************************ 00:06:28.444 END TEST event_scheduler 00:06:28.444 ************************************ 00:06:28.444 03:09:34 event -- common/autotest_common.sh@1142 -- # return 0 00:06:28.444 03:09:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:28.444 03:09:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:28.444 03:09:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.444 03:09:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.444 03:09:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.444 ************************************ 00:06:28.444 START TEST app_repeat 00:06:28.444 ************************************ 00:06:28.444 03:09:34 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:28.444 03:09:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.444 03:09:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.444 03:09:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:28.444 03:09:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.444 03:09:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:28.444 03:09:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:28.444 03:09:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:28.444 03:09:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3065503 00:06:28.444 03:09:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:28.444 03:09:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.445 03:09:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3065503' 00:06:28.445 Process app_repeat pid: 3065503 00:06:28.445 03:09:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.445 03:09:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:28.445 spdk_app_start Round 0 00:06:28.445 03:09:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3065503 /var/tmp/spdk-nbd.sock 00:06:28.445 03:09:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3065503 ']' 00:06:28.445 03:09:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.445 03:09:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.445 03:09:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.445 03:09:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.445 03:09:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.445 [2024-07-15 03:09:34.538910] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:28.445 [2024-07-15 03:09:34.538994] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065503 ] 00:06:28.445 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.703 [2024-07-15 03:09:34.603125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.703 [2024-07-15 03:09:34.692791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.703 [2024-07-15 03:09:34.692796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.703 03:09:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.703 03:09:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:28.703 03:09:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.961 Malloc0 00:06:28.961 03:09:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.219 Malloc1 00:06:29.219 03:09:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.219 03:09:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.476 /dev/nbd0 00:06:29.476 03:09:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.476 03:09:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:29.476 03:09:35 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.476 1+0 records in 00:06:29.476 1+0 records out 00:06:29.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200293 s, 20.5 MB/s 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.476 03:09:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:29.734 03:09:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:29.734 03:09:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.734 03:09:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.734 03:09:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.734 /dev/nbd1 00:06:29.734 03:09:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.734 03:09:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.734 03:09:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:29.734 03:09:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:29.734 03:09:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:29.734 03:09:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:29.734 03:09:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:29.734 03:09:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:29.734 03:09:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:29.734 03:09:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:29.734 03:09:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.992 1+0 records in 00:06:29.992 1+0 records out 00:06:29.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207771 s, 19.7 MB/s 00:06:29.992 03:09:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.992 03:09:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:29.992 03:09:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.992 03:09:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:29.992 03:09:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:29.992 03:09:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.993 03:09:35 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.993 03:09:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.993 03:09:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.993 03:09:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.993 03:09:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.993 { 00:06:29.993 "nbd_device": "/dev/nbd0", 00:06:29.993 "bdev_name": "Malloc0" 00:06:29.993 }, 00:06:29.993 { 00:06:29.993 "nbd_device": "/dev/nbd1", 00:06:29.993 "bdev_name": "Malloc1" 00:06:29.993 } 00:06:29.993 ]' 00:06:29.993 03:09:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.993 { 00:06:29.993 "nbd_device": "/dev/nbd0", 00:06:29.993 "bdev_name": "Malloc0" 00:06:29.993 }, 00:06:29.993 { 00:06:29.993 "nbd_device": "/dev/nbd1", 00:06:29.993 "bdev_name": "Malloc1" 00:06:29.993 } 00:06:29.993 ]' 00:06:29.993 03:09:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.251 03:09:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.251 /dev/nbd1' 00:06:30.251 03:09:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.251 /dev/nbd1' 00:06:30.251 03:09:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.251 03:09:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.251 03:09:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.251 03:09:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.251 03:09:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.251 03:09:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.252 256+0 records in 00:06:30.252 256+0 records out 00:06:30.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507707 s, 207 MB/s 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.252 256+0 records in 00:06:30.252 256+0 records out 00:06:30.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234003 s, 44.8 MB/s 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.252 256+0 records in 00:06:30.252 256+0 records out 00:06:30.252 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0229441 s, 45.7 MB/s 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.252 03:09:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.510 03:09:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.510 03:09:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.510 03:09:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.510 03:09:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.510 03:09:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.510 03:09:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.510 03:09:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.510 03:09:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.510 03:09:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.510 03:09:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.769 03:09:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.769 03:09:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.769 03:09:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.769 03:09:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.769 03:09:36 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.769 03:09:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.769 03:09:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.769 03:09:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.769 03:09:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.769 03:09:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.769 03:09:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.027 03:09:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.027 03:09:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.285 03:09:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.543 [2024-07-15 03:09:37.573397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.543 [2024-07-15 03:09:37.662689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.543 [2024-07-15 03:09:37.662690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.833 [2024-07-15 03:09:37.721764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.833 [2024-07-15 03:09:37.721847] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.362 03:09:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.362 03:09:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:34.362 spdk_app_start Round 1 00:06:34.362 03:09:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3065503 /var/tmp/spdk-nbd.sock 00:06:34.362 03:09:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3065503 ']' 00:06:34.362 03:09:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.362 03:09:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.362 03:09:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
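The teardown sequence above repeats for every nbd device: nbd_stop_disk asks the target to detach the device, then the harness polls /proc/partitions until the kernel has actually released the node. A minimal sketch of that waitfornbd_exit loop, reconstructed from the xtrace (the 20-try bound matches the (( i <= 20 )) guard in the trace; the sleep interval is an assumption, since the trace elides the delay):

    # Poll until an nbd device disappears from /proc/partitions (sketch).
    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # break as soon as the kernel no longer lists the device
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1    # assumed back-off; the trace does not show the delay
        done
        (( i <= 20 ))    # success only if we broke out before exhausting retries
    }

In the trace above, the very first grep already fails to find nbd0 and nbd1, so the loop breaks on its first iteration and the helper returns 0 immediately.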
00:06:34.362 03:09:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.362 03:09:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.620 03:09:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.620 03:09:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:34.620 03:09:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.878 Malloc0 00:06:34.878 03:09:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.136 Malloc1 00:06:35.136 03:09:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.136 03:09:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.394 /dev/nbd0 00:06:35.394 03:09:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.394 03:09:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:35.394 1+0 records in 00:06:35.394 1+0 records out 00:06:35.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172646 s, 23.7 MB/s 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.394 03:09:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:35.394 03:09:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.394 03:09:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.394 03:09:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:35.652 /dev/nbd1 00:06:35.652 03:09:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.652 03:09:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.652 1+0 records in 00:06:35.652 1+0 records out 00:06:35.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213661 s, 19.2 MB/s 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.652 03:09:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:35.652 03:09:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.652 03:09:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.652 03:09:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.652 03:09:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.652 03:09:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:35.931 { 00:06:35.931 "nbd_device": "/dev/nbd0", 00:06:35.931 "bdev_name": "Malloc0" 00:06:35.931 }, 00:06:35.931 { 00:06:35.931 "nbd_device": "/dev/nbd1", 00:06:35.931 "bdev_name": "Malloc1" 00:06:35.931 } 00:06:35.931 ]' 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.931 { 00:06:35.931 "nbd_device": "/dev/nbd0", 00:06:35.931 "bdev_name": "Malloc0" 00:06:35.931 }, 00:06:35.931 { 00:06:35.931 "nbd_device": "/dev/nbd1", 00:06:35.931 "bdev_name": "Malloc1" 00:06:35.931 } 00:06:35.931 ]' 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.931 /dev/nbd1' 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.931 /dev/nbd1' 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.931 256+0 records in 00:06:35.931 256+0 records out 00:06:35.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00487659 s, 215 MB/s 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.931 256+0 records in 00:06:35.931 256+0 records out 00:06:35.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237523 s, 44.1 MB/s 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.931 03:09:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.931 256+0 records in 00:06:35.931 256+0 records out 00:06:35.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254352 s, 41.2 MB/s 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.931 03:09:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.932 03:09:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.932 03:09:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.932 03:09:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.932 03:09:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.189 03:09:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.189 03:09:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.189 03:09:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.189 03:09:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.189 03:09:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.189 03:09:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.189 03:09:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.189 03:09:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.189 03:09:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.189 03:09:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.445 03:09:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.445 03:09:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.445 03:09:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.445 03:09:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.445 03:09:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.445 03:09:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.445 03:09:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.445 03:09:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.445 03:09:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.445 03:09:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.445 03:09:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.702 03:09:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.702 03:09:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.702 03:09:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.959 03:09:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.959 03:09:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.959 03:09:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.959 03:09:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:36.959 03:09:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.959 03:09:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.959 03:09:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.959 03:09:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.959 03:09:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.959 03:09:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.217 03:09:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.217 [2024-07-15 03:09:43.339406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.475 [2024-07-15 03:09:43.431157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.475 [2024-07-15 03:09:43.431160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.475 [2024-07-15 03:09:43.493853] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.475 [2024-07-15 03:09:43.493951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.995 03:09:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.995 03:09:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:39.995 spdk_app_start Round 2 00:06:39.995 03:09:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3065503 /var/tmp/spdk-nbd.sock 00:06:39.995 03:09:46 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3065503 ']' 00:06:39.995 03:09:46 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.995 03:09:46 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.995 03:09:46 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:39.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
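Each round's data check follows the same dd/cmp pattern seen above: fill a scratch file with 1 MiB of random data, copy it onto every exported nbd device with O_DIRECT, then compare each device back against the file byte-for-byte. A hedged sketch of nbd_dd_data_verify as traced (the long Jenkins workspace paths are shortened here; the commands themselves mirror the log):

    # Write random data to every nbd device, or verify it back (sketch).
    nbd_dd_data_verify() {
        local nbd_list=($1)                 # space-separated list, e.g. '/dev/nbd0 /dev/nbd1'
        local operation=$2                  # 'write' or 'verify'
        local tmp_file=/tmp/nbdrandtest     # the real test keeps this under spdk/test/event
        local i
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256           # 1 MiB of random data
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct  # bypass the page cache
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"   # -n 1M limits the compare to the written range
            done
            rm "$tmp_file"
        fi
    }

The oflag=direct on the write pass matters: without it the data could still be sitting in the page cache when cmp runs, and the verify pass would not prove the bytes made the round trip through the SPDK target.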
00:06:39.995 03:09:46 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.995 03:09:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.252 03:09:46 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.252 03:09:46 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:40.252 03:09:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.510 Malloc0 00:06:40.510 03:09:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.767 Malloc1 00:06:40.767 03:09:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.767 03:09:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.767 03:09:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.767 03:09:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.767 03:09:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.767 03:09:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.025 03:09:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.025 03:09:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.025 03:09:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.025 03:09:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.025 03:09:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.025 03:09:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.025 03:09:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.025 03:09:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.025 03:09:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.025 03:09:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.025 /dev/nbd0 00:06:41.025 03:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.025 03:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.025 03:09:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:41.025 03:09:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:41.025 03:09:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.025 03:09:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.025 03:09:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:41.025 03:09:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:41.025 03:09:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.025 03:09:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.025 03:09:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:41.282 1+0 records in 00:06:41.282 1+0 records out 00:06:41.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174367 s, 23.5 MB/s 00:06:41.282 03:09:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.282 03:09:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:41.282 03:09:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.282 03:09:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.282 03:09:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:41.282 03:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.282 03:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.282 03:09:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.282 /dev/nbd1 00:06:41.539 03:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.539 03:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.539 03:09:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:41.539 03:09:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:41.539 03:09:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.539 03:09:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.539 03:09:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:41.539 03:09:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:41.539 03:09:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.539 03:09:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.540 03:09:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.540 1+0 records in 00:06:41.540 1+0 records out 00:06:41.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184589 s, 22.2 MB/s 00:06:41.540 03:09:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.540 03:09:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:41.540 03:09:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.540 03:09:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.540 03:09:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:41.540 03:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.540 03:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.540 03:09:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.540 03:09:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.540 03:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:41.797 { 00:06:41.797 "nbd_device": "/dev/nbd0", 00:06:41.797 "bdev_name": "Malloc0" 00:06:41.797 }, 00:06:41.797 { 00:06:41.797 "nbd_device": "/dev/nbd1", 00:06:41.797 "bdev_name": "Malloc1" 00:06:41.797 } 00:06:41.797 ]' 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.797 { 00:06:41.797 "nbd_device": "/dev/nbd0", 00:06:41.797 "bdev_name": "Malloc0" 00:06:41.797 }, 00:06:41.797 { 00:06:41.797 "nbd_device": "/dev/nbd1", 00:06:41.797 "bdev_name": "Malloc1" 00:06:41.797 } 00:06:41.797 ]' 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.797 /dev/nbd1' 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.797 /dev/nbd1' 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.797 256+0 records in 00:06:41.797 256+0 records out 00:06:41.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377909 s, 277 MB/s 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.797 256+0 records in 00:06:41.797 256+0 records out 00:06:41.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237453 s, 44.2 MB/s 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.797 256+0 records in 00:06:41.797 256+0 records out 00:06:41.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257582 s, 40.7 MB/s 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.797 03:09:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.055 03:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.055 03:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.055 03:09:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.055 03:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.055 03:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.055 03:09:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.055 03:09:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.055 03:09:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.055 03:09:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.055 03:09:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.336 03:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.336 03:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.336 03:09:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.336 03:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.336 03:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.336 03:09:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.336 03:09:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.336 03:09:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.336 03:09:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.336 03:09:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.336 03:09:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.593 03:09:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.593 03:09:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.850 03:09:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.109 [2024-07-15 03:09:49.122975] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.109 [2024-07-15 03:09:49.212283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.109 [2024-07-15 03:09:49.212289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.367 [2024-07-15 03:09:49.274077] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.367 [2024-07-15 03:09:49.274152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.892 03:09:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3065503 /var/tmp/spdk-nbd.sock 00:06:45.892 03:09:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3065503 ']' 00:06:45.892 03:09:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.892 03:09:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.892 03:09:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
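The zero-count check that closes each round is worth unpacking: nbd_get_disks returns a JSON array of {"nbd_device", "bdev_name"} objects, jq flattens it to one device path per line, and grep -c counts the paths. The bare `true` in the trace exists because grep -c exits non-zero when the count is zero, which would otherwise trip the suite's error handling. A sketch consistent with the trace (the rpc.py path is shortened):

    # Count nbd devices currently exported by the target (sketch of nbd_get_count).
    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)   # '[]' once all disks are stopped
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints the count but exits 1 on zero matches; '|| true'
        # keeps a count of 0 from aborting the test
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

After nbd_stop_disks the expected count is 0, which is why the `'[' 0 -ne 0 ']'` branch above is skipped and the helper returns 0.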
00:06:45.892 03:09:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.892 03:09:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:46.149 03:09:52 event.app_repeat -- event/event.sh@39 -- # killprocess 3065503 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3065503 ']' 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3065503 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3065503 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3065503' 00:06:46.149 killing process with pid 3065503 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3065503 00:06:46.149 03:09:52 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3065503 00:06:46.407 spdk_app_start is called in Round 0. 00:06:46.407 Shutdown signal received, stop current app iteration 00:06:46.407 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:46.407 spdk_app_start is called in Round 1. 00:06:46.407 Shutdown signal received, stop current app iteration 00:06:46.407 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:46.407 spdk_app_start is called in Round 2. 00:06:46.407 Shutdown signal received, stop current app iteration 00:06:46.407 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:46.407 spdk_app_start is called in Round 3. 
00:06:46.407 Shutdown signal received, stop current app iteration 00:06:46.407 03:09:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:46.407 03:09:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:46.407 00:06:46.407 real 0m17.868s 00:06:46.407 user 0m38.818s 00:06:46.407 sys 0m3.270s 00:06:46.407 03:09:52 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.407 03:09:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.407 ************************************ 00:06:46.407 END TEST app_repeat 00:06:46.407 ************************************ 00:06:46.408 03:09:52 event -- common/autotest_common.sh@1142 -- # return 0 00:06:46.408 03:09:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:46.408 03:09:52 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:46.408 03:09:52 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.408 03:09:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.408 03:09:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.408 ************************************ 00:06:46.408 START TEST cpu_locks 00:06:46.408 ************************************ 00:06:46.408 03:09:52 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:46.408 * Looking for test storage... 00:06:46.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:46.408 03:09:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:46.408 03:09:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:46.408 03:09:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:46.408 03:09:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:46.408 03:09:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.408 03:09:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.408 03:09:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.408 ************************************ 00:06:46.408 START TEST default_locks 00:06:46.408 ************************************ 00:06:46.408 03:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:46.408 03:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3067848 00:06:46.408 03:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.408 03:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3067848 00:06:46.408 03:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3067848 ']' 00:06:46.408 03:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.408 03:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.408 03:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
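From here the log switches from app_repeat to the cpu_locks suite, but the startup handshake is the same one used throughout: launch the target, then block until its UNIX-domain RPC socket is serving. A minimal sketch of that waitforlisten helper; the max_retries=100 local is taken from the trace, while the readiness probe shown here (testing for the socket file) and the poll interval are assumptions, not the verbatim SPDK implementation:

    # Block until an SPDK target's RPC socket is up, or the process dies (sketch).
    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100                        # matches the local in the trace
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited during startup
            if [ -S "$rpc_addr" ]; then
                return 0                             # socket exists: RPC server is listening
            fi
            sleep 0.1                                # assumed poll interval
        done
        return 1
    }

The `(( i == 0 ))` / `return 0` pair that follows each wait in the log appears to be this helper's success epilogue reporting that no retries were left unaccounted for.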
00:06:46.408 03:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.408 03:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.665 [2024-07-15 03:09:52.553447] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:46.666 [2024-07-15 03:09:52.553541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067848 ] 00:06:46.666 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.666 [2024-07-15 03:09:52.616306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.666 [2024-07-15 03:09:52.707323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.924 03:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.924 03:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:46.924 03:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3067848 00:06:46.924 03:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3067848 00:06:46.924 03:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.489 lslocks: write error 00:06:47.489 03:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3067848 00:06:47.489 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3067848 ']' 00:06:47.489 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3067848 00:06:47.489 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:47.489 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.489 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3067848 00:06:47.489 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.489 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.489 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3067848' 00:06:47.489 killing process with pid 3067848 00:06:47.489 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3067848 00:06:47.489 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3067848 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3067848 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3067848 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3067848 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3067848 ']' 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3067848) - No such process 00:06:47.747 ERROR: process (pid: 3067848) is no longer running 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:47.747 00:06:47.747 real 0m1.305s 00:06:47.747 user 0m1.237s 00:06:47.747 sys 0m0.554s 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.747 03:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.747 ************************************ 00:06:47.747 END TEST default_locks 00:06:47.747 ************************************ 00:06:47.747 03:09:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:47.747 03:09:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:47.747 03:09:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.747 03:09:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.747 03:09:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.747 ************************************ 00:06:47.747 START TEST default_locks_via_rpc 00:06:47.747 ************************************ 00:06:47.747 03:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:47.747 03:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3068017 00:06:47.747 03:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.747 03:09:53 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3068017 00:06:47.747 03:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3068017 ']' 00:06:47.747 03:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.747 03:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.747 03:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.747 03:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.747 03:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.006 [2024-07-15 03:09:53.909722] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:48.006 [2024-07-15 03:09:53.909822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068017 ] 00:06:48.006 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.006 [2024-07-15 03:09:53.972331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.006 [2024-07-15 03:09:54.059951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3068017 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3068017 00:06:48.264 03:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.521 
03:09:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3068017 00:06:48.521 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3068017 ']' 00:06:48.521 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3068017 00:06:48.521 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:48.521 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.521 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3068017 00:06:48.521 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.521 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.521 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3068017' 00:06:48.521 killing process with pid 3068017 00:06:48.521 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3068017 00:06:48.521 03:09:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3068017 00:06:49.087 00:06:49.087 real 0m1.191s 00:06:49.087 user 0m1.130s 00:06:49.087 sys 0m0.514s 00:06:49.087 03:09:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.087 03:09:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.087 ************************************ 00:06:49.087 END TEST default_locks_via_rpc 00:06:49.087 ************************************ 00:06:49.087 03:09:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.087 03:09:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:49.088 03:09:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.088 03:09:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.088 03:09:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.088 ************************************ 00:06:49.088 START TEST non_locking_app_on_locked_coremask 00:06:49.088 ************************************ 00:06:49.088 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:49.088 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3068179 00:06:49.088 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.088 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3068179 /var/tmp/spdk.sock 00:06:49.088 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3068179 ']' 00:06:49.088 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.088 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.088 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.088 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.088 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.088 [2024-07-15 03:09:55.148556] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:49.088 [2024-07-15 03:09:55.148642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068179 ] 00:06:49.088 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.088 [2024-07-15 03:09:55.206787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.347 [2024-07-15 03:09:55.295840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.605 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.605 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:49.605 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3068187 00:06:49.605 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:49.605 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3068187 /var/tmp/spdk2.sock 00:06:49.605 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3068187 ']' 00:06:49.605 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.605 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.605 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.605 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.605 03:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.605 [2024-07-15 03:09:55.587417] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:49.605 [2024-07-15 03:09:55.587492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068187 ] 00:06:49.605 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.605 [2024-07-15 03:09:55.679624] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
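The probe traced at event/cpu_locks.sh@22 earlier (lslocks -p PID piped into grep) is how every test in this suite decides whether a core claim is in place. A minimal bash sketch of that check, reconstructed from the xtrace rather than copied from the repository:

    # Hedged sketch: SPDK claims a core by locking /var/tmp/spdk_cpu_lock_NNN,
    # and lslocks -p lists the file locks a given process holds.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }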
00:06:49.605 [2024-07-15 03:09:55.679657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.887 [2024-07-15 03:09:55.869730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.462 03:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.462 03:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:50.462 03:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3068179 00:06:50.462 03:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3068179 00:06:50.462 03:09:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.399 lslocks: write error 00:06:51.399 03:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3068179 00:06:51.399 03:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3068179 ']' 00:06:51.399 03:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3068179 00:06:51.399 03:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:51.399 03:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.399 03:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3068179 00:06:51.399 03:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.399 03:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.399 03:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3068179' 00:06:51.399 killing process with pid 3068179 00:06:51.399 03:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3068179 00:06:51.399 03:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3068179 00:06:51.963 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3068187 00:06:51.963 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3068187 ']' 00:06:51.963 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3068187 00:06:51.963 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:51.963 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.963 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3068187 00:06:51.963 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.963 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.963 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3068187' 00:06:51.963 
killing process with pid 3068187 00:06:51.963 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3068187 00:06:51.963 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3068187 00:06:52.528 00:06:52.528 real 0m3.400s 00:06:52.528 user 0m3.559s 00:06:52.528 sys 0m1.085s 00:06:52.528 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.528 03:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.528 ************************************ 00:06:52.528 END TEST non_locking_app_on_locked_coremask 00:06:52.528 ************************************ 00:06:52.528 03:09:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:52.528 03:09:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:52.528 03:09:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.528 03:09:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.528 03:09:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.528 ************************************ 00:06:52.528 START TEST locking_app_on_unlocked_coremask 00:06:52.528 ************************************ 00:06:52.528 03:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:52.528 03:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3068615 00:06:52.528 03:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:52.528 03:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3068615 /var/tmp/spdk.sock 00:06:52.528 03:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3068615 ']' 00:06:52.528 03:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.528 03:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.528 03:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.528 03:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.528 03:09:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.528 [2024-07-15 03:09:58.601901] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
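The non_locking_app_on_locked_coremask pass that finished just above reduces to two launches on the same mask, visible at cpu_locks.sh@79 and @83 in the trace; the second opts out of locking, so both targets coexist. Condensed, with the long build path shortened:

    spdk_tgt -m 0x1 &                                                 # first target claims core 0
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second skips the claim, starts cleanly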
00:06:52.528 [2024-07-15 03:09:58.601990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068615 ] 00:06:52.528 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.528 [2024-07-15 03:09:58.661450] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:52.528 [2024-07-15 03:09:58.661488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.786 [2024-07-15 03:09:58.747798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.044 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.044 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:53.044 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3068629 00:06:53.044 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3068629 /var/tmp/spdk2.sock 00:06:53.044 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.044 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3068629 ']' 00:06:53.044 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.044 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.044 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.044 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.044 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.044 [2024-07-15 03:09:59.050956] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
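Each launch is followed by waitforlisten, whose locals (rpc_addr, max_retries=100) and final (( i == 0 ))/return 0 steps appear in the trace. The helper lives in test/common/autotest_common.sh; the loop below is a hedged reconstruction of the behaviour those trace lines imply, not the actual source, and the rpc_get_methods probe is an assumption:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1                         # target died early
            # probe RPC; the real helper's probe may differ in detail
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
            sleep 0.5
        done
        (( i == 0 )) && return 1                                           # retries exhausted
        return 0
    }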
00:06:53.044 [2024-07-15 03:09:59.051031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068629 ] 00:06:53.044 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.044 [2024-07-15 03:09:59.145164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.302 [2024-07-15 03:09:59.329671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.869 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.869 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:53.869 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3068629 00:06:53.869 03:09:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3068629 00:06:53.869 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.803 lslocks: write error 00:06:54.803 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3068615 00:06:54.803 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3068615 ']' 00:06:54.803 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3068615 00:06:54.803 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:54.803 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.803 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3068615 00:06:54.803 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.803 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.803 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3068615' 00:06:54.803 killing process with pid 3068615 00:06:54.803 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3068615 00:06:54.803 03:10:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3068615 00:06:55.369 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3068629 00:06:55.369 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3068629 ']' 00:06:55.369 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3068629 00:06:55.369 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:55.369 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.369 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3068629 00:06:55.369 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:55.369 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.369 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3068629' 00:06:55.369 killing process with pid 3068629 00:06:55.369 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3068629 00:06:55.369 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3068629 00:06:55.935 00:06:55.935 real 0m3.348s 00:06:55.935 user 0m3.486s 00:06:55.935 sys 0m1.110s 00:06:55.935 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.935 03:10:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.935 ************************************ 00:06:55.935 END TEST locking_app_on_unlocked_coremask 00:06:55.935 ************************************ 00:06:55.935 03:10:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:55.935 03:10:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:55.935 03:10:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.935 03:10:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.935 03:10:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.935 ************************************ 00:06:55.935 START TEST locking_app_on_locked_coremask 00:06:55.935 ************************************ 00:06:55.935 03:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:55.935 03:10:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3069055 00:06:55.935 03:10:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.935 03:10:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3069055 /var/tmp/spdk.sock 00:06:55.935 03:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3069055 ']' 00:06:55.936 03:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.936 03:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.936 03:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.936 03:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.936 03:10:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.936 [2024-07-15 03:10:02.002581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
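The teardown sequence traced repeatedly above is killprocess from autotest_common.sh (@948-@972). Sketched from those trace lines, with the sudo branch and error handling simplified:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1              # @948: refuse an empty pid
        kill -0 "$pid"                         # @952: liveness check, no signal sent
        if [ "$(uname)" = Linux ]; then        # @953
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @954: reactor_0 in these runs
            : # @958 compares $process_name to "sudo"; that branch is never taken here
        fi
        echo "killing process with pid $pid"   # @966
        kill "$pid"                            # @967
        wait "$pid"                            # @972: reap it so the socket is free for reuse
    }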
00:06:55.936 [2024-07-15 03:10:02.002683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069055 ] 00:06:55.936 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.936 [2024-07-15 03:10:02.065463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.194 [2024-07-15 03:10:02.156757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.452 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.452 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:56.452 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3069063 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3069063 /var/tmp/spdk2.sock 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3069063 /var/tmp/spdk2.sock 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3069063 /var/tmp/spdk2.sock 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3069063 ']' 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.453 03:10:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 [2024-07-15 03:10:02.467618] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
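The NOT wrapper at cpu_locks.sh@120 above inverts an expectation: waitforlisten on the second target must fail, because pid 3069055 already holds the core-0 lock. A hedged sketch matching the es bookkeeping in the trace (@648-@675):

    NOT() {
        local es=0
        "$@" || es=$?        # run the wrapped command, remember its exit status
        # the real helper also special-cases es > 128 (signal deaths) and an
        # allow-list of exit codes; both are omitted in this sketch
        (( !es == 0 ))       # succeed only if the wrapped command failed
    }
    NOT waitforlisten 3069063 /var/tmp/spdk2.sock   # passes when the second target is locked out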
00:06:56.453 [2024-07-15 03:10:02.467712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069063 ] 00:06:56.453 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.453 [2024-07-15 03:10:02.566354] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3069055 has claimed it. 00:06:56.453 [2024-07-15 03:10:02.566428] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:57.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3069063) - No such process 00:06:57.386 ERROR: process (pid: 3069063) is no longer running 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3069055 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3069055 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.386 lslocks: write error 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3069055 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3069055 ']' 00:06:57.386 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3069055 00:06:57.387 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:57.387 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.387 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3069055 00:06:57.387 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.387 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.387 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3069055' 00:06:57.387 killing process with pid 3069055 00:06:57.387 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3069055 00:06:57.387 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3069055 00:06:57.951 00:06:57.951 real 0m1.853s 00:06:57.951 user 0m2.001s 00:06:57.951 sys 0m0.611s 00:06:57.951 03:10:03 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.951 03:10:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.951 ************************************ 00:06:57.951 END TEST locking_app_on_locked_coremask 00:06:57.951 ************************************ 00:06:57.951 03:10:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:57.951 03:10:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:57.951 03:10:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.951 03:10:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.951 03:10:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.951 ************************************ 00:06:57.951 START TEST locking_overlapped_coremask 00:06:57.951 ************************************ 00:06:57.951 03:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:57.951 03:10:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3069350 00:06:57.951 03:10:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:57.951 03:10:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3069350 /var/tmp/spdk.sock 00:06:57.951 03:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3069350 ']' 00:06:57.951 03:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.951 03:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.951 03:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.951 03:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.951 03:10:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.951 [2024-07-15 03:10:03.901064] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:57.951 [2024-07-15 03:10:03.901152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069350 ] 00:06:57.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.951 [2024-07-15 03:10:03.959899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.951 [2024-07-15 03:10:04.049672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.951 [2024-07-15 03:10:04.049738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.951 [2024-07-15 03:10:04.049740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3069362 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3069362 /var/tmp/spdk2.sock 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3069362 /var/tmp/spdk2.sock 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3069362 /var/tmp/spdk2.sock 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3069362 ']' 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.210 03:10:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.210 [2024-07-15 03:10:04.351160] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
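The collision being provoked here is plain mask arithmetic: the first target runs with -m 0x7, the second (launched next) with -m 0x1c, and the two masks share exactly one bit. In bash terms:

    printf '0x%x\n' $(( 0x07 & 0x1c ))   # -> 0x4, i.e. bit 2
    # 0x07 = 0b00111 -> cores 0,1,2 (the three reactors above)
    # 0x1c = 0b11100 -> cores 2,3,4
    # both claim core 2, hence the "Cannot create lock on core 2" error below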
00:06:58.210 [2024-07-15 03:10:04.351268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069362 ] 00:06:58.467 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.467 [2024-07-15 03:10:04.439293] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3069350 has claimed it. 00:06:58.467 [2024-07-15 03:10:04.439351] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:59.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3069362) - No such process 00:06:59.032 ERROR: process (pid: 3069362) is no longer running 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3069350 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3069350 ']' 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3069350 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3069350 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3069350' 00:06:59.032 killing process with pid 3069350 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3069350 00:06:59.032 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3069350 00:06:59.595 00:06:59.595 real 0m1.628s 00:06:59.595 user 0m4.412s 00:06:59.595 sys 0m0.437s 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.595 ************************************ 00:06:59.595 END TEST locking_overlapped_coremask 00:06:59.595 ************************************ 00:06:59.595 03:10:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:59.595 03:10:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:59.595 03:10:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.595 03:10:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.595 03:10:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.595 ************************************ 00:06:59.595 START TEST locking_overlapped_coremask_via_rpc 00:06:59.595 ************************************ 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3069524 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3069524 /var/tmp/spdk.sock 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3069524 ']' 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.595 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.595 [2024-07-15 03:10:05.579521] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:59.595 [2024-07-15 03:10:05.579624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069524 ] 00:06:59.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.595 [2024-07-15 03:10:05.642738] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
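check_remaining_locks, traced just above at cpu_locks.sh@36-@38, verifies that after the conflict exactly the first target's lock files survive. Sketched from those trace lines:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # one file per core in 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]                # exact set match
    }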
00:06:59.595 [2024-07-15 03:10:05.642776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.595 [2024-07-15 03:10:05.734092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.595 [2024-07-15 03:10:05.734142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.595 [2024-07-15 03:10:05.734159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.853 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.853 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:59.853 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3069633 00:06:59.853 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:59.853 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3069633 /var/tmp/spdk2.sock 00:06:59.853 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3069633 ']' 00:06:59.853 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.853 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.853 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.853 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.853 03:10:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.112 [2024-07-15 03:10:06.026738] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:00.112 [2024-07-15 03:10:06.026839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069633 ] 00:07:00.112 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.112 [2024-07-15 03:10:06.116465] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
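Both via_rpc targets start with --disable-cpumask-locks, so neither holds a claim at launch; the harness then re-arms locking over JSON-RPC. rpc_cmd in the trace is the harness wrapper, but issued by hand the two calls would look roughly like this (socket paths as in the trace; the rpc.py location is assumed):

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: must fail on core 2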
00:07:00.112 [2024-07-15 03:10:06.116510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.370 [2024-07-15 03:10:06.293045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.370 [2024-07-15 03:10:06.293106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:00.370 [2024-07-15 03:10:06.293108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.936 [2024-07-15 03:10:06.979992] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3069524 has claimed it. 
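The JSON exchange that follows is rpc_cmd echoing the failed call: claim_cpu_cores refuses core 2 and the target reports it as JSON-RPC error -32603. That failure is exactly what the assertion at cpu_locks.sh@156 above demands:

    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # passes because the response is {"code": -32603, "message": "Failed to claim CPU core: 2"}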
00:07:00.936 request: 00:07:00.936 { 00:07:00.936 "method": "framework_enable_cpumask_locks", 00:07:00.936 "req_id": 1 00:07:00.936 } 00:07:00.936 Got JSON-RPC error response 00:07:00.936 response: 00:07:00.936 { 00:07:00.936 "code": -32603, 00:07:00.936 "message": "Failed to claim CPU core: 2" 00:07:00.936 } 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3069524 /var/tmp/spdk.sock 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3069524 ']' 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.936 03:10:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.193 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.193 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:01.193 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3069633 /var/tmp/spdk2.sock 00:07:01.193 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3069633 ']' 00:07:01.193 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.193 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.193 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:01.193 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.193 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.450 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.450 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:01.450 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:01.450 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:01.450 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:01.450 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:01.450 00:07:01.450 real 0m1.961s 00:07:01.450 user 0m1.007s 00:07:01.450 sys 0m0.195s 00:07:01.450 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.450 03:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.450 ************************************ 00:07:01.450 END TEST locking_overlapped_coremask_via_rpc 00:07:01.450 ************************************ 00:07:01.450 03:10:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:01.450 03:10:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:01.450 03:10:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3069524 ]] 00:07:01.450 03:10:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3069524 00:07:01.450 03:10:07 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3069524 ']' 00:07:01.450 03:10:07 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3069524 00:07:01.450 03:10:07 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:01.450 03:10:07 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.450 03:10:07 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3069524 00:07:01.450 03:10:07 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.450 03:10:07 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.450 03:10:07 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3069524' 00:07:01.450 killing process with pid 3069524 00:07:01.450 03:10:07 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3069524 00:07:01.450 03:10:07 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3069524 00:07:02.016 03:10:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3069633 ]] 00:07:02.016 03:10:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3069633 00:07:02.016 03:10:07 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3069633 ']' 00:07:02.016 03:10:07 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3069633 00:07:02.016 03:10:07 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:02.016 03:10:07 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.016 03:10:07 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3069633 00:07:02.016 03:10:07 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:02.016 03:10:07 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:02.016 03:10:07 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3069633' 00:07:02.016 killing process with pid 3069633 00:07:02.016 03:10:07 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3069633 00:07:02.016 03:10:07 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3069633 00:07:02.274 03:10:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.274 03:10:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:02.274 03:10:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3069524 ]] 00:07:02.274 03:10:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3069524 00:07:02.274 03:10:08 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3069524 ']' 00:07:02.274 03:10:08 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3069524 00:07:02.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3069524) - No such process 00:07:02.274 03:10:08 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3069524 is not found' 00:07:02.274 Process with pid 3069524 is not found 00:07:02.274 03:10:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3069633 ]] 00:07:02.274 03:10:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3069633 00:07:02.274 03:10:08 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3069633 ']' 00:07:02.274 03:10:08 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3069633 00:07:02.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3069633) - No such process 00:07:02.274 03:10:08 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3069633 is not found' 00:07:02.274 Process with pid 3069633 is not found 00:07:02.274 03:10:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.274 00:07:02.274 real 0m15.952s 00:07:02.274 user 0m27.581s 00:07:02.274 sys 0m5.392s 00:07:02.274 03:10:08 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.274 03:10:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.274 ************************************ 00:07:02.274 END TEST cpu_locks 00:07:02.274 ************************************ 00:07:02.274 03:10:08 event -- common/autotest_common.sh@1142 -- # return 0 00:07:02.274 00:07:02.274 real 0m39.640s 00:07:02.274 user 1m15.198s 00:07:02.274 sys 0m9.516s 00:07:02.274 03:10:08 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.274 03:10:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.274 ************************************ 00:07:02.274 END TEST event 00:07:02.274 ************************************ 00:07:02.532 03:10:08 -- common/autotest_common.sh@1142 -- # return 0 00:07:02.532 03:10:08 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:02.532 03:10:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.532 03:10:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.532 
03:10:08 -- common/autotest_common.sh@10 -- # set +x 00:07:02.532 ************************************ 00:07:02.532 START TEST thread 00:07:02.532 ************************************ 00:07:02.532 03:10:08 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:02.532 * Looking for test storage... 00:07:02.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:02.532 03:10:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:02.532 03:10:08 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:02.532 03:10:08 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.532 03:10:08 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.532 ************************************ 00:07:02.532 START TEST thread_poller_perf 00:07:02.532 ************************************ 00:07:02.532 03:10:08 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:02.532 [2024-07-15 03:10:08.540053] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:02.532 [2024-07-15 03:10:08.540113] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070018 ] 00:07:02.532 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.532 [2024-07-15 03:10:08.604598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.791 [2024-07-15 03:10:08.694253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.791 Running 1000 pollers for 1 seconds with 1 microseconds period. 
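The flags in the poller_perf invocation above map one-to-one onto the announcement it prints: 1000 pollers, a 1 microsecond period, a 1 second run. Spelled out, with flag meanings read off the announcement itself:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    #   -b 1000   register 1000 pollers
    #   -l 1      poller period in microseconds (the second run below uses -l 0)
    #   -t 1      run the reactor for 1 second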
00:07:03.723 ====================================== 00:07:03.723 busy:2711289507 (cyc) 00:07:03.723 total_run_count: 294000 00:07:03.723 tsc_hz: 2700000000 (cyc) 00:07:03.723 ====================================== 00:07:03.723 poller_cost: 9222 (cyc), 3415 (nsec) 00:07:03.723 00:07:03.723 real 0m1.258s 00:07:03.723 user 0m1.173s 00:07:03.723 sys 0m0.079s 00:07:03.723 03:10:09 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.723 03:10:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.723 ************************************ 00:07:03.723 END TEST thread_poller_perf 00:07:03.723 ************************************ 00:07:03.723 03:10:09 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:03.723 03:10:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:03.723 03:10:09 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:03.723 03:10:09 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.723 03:10:09 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.723 ************************************ 00:07:03.723 START TEST thread_poller_perf 00:07:03.723 ************************************ 00:07:03.723 03:10:09 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:03.723 [2024-07-15 03:10:09.849362] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:03.723 [2024-07-15 03:10:09.849428] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070180 ] 00:07:03.981 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.981 [2024-07-15 03:10:09.911836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.981 [2024-07-15 03:10:10.003989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.981 Running 1000 pollers for 1 seconds with 0 microseconds period. 
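The two poller_perf passes driven by thread.sh differ only in the period flag, which is consistent with the cost gap between them (9222 cyc above vs 700 cyc in the summary below): with -l 0 the pollers are untimed and fire on every reactor iteration, so the per-call overhead is much smaller. The invocations, as logged:

    poller_perf -b 1000 -l 1 -t 1    # run 1: 1000 pollers, 1 us period
    poller_perf -b 1000 -l 0 -t 1    # run 2: 1000 pollers, period 0 (untimed)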
00:07:05.354 ====================================== 00:07:05.354 busy:2703052394 (cyc) 00:07:05.354 total_run_count: 3858000 00:07:05.354 tsc_hz: 2700000000 (cyc) 00:07:05.354 ====================================== 00:07:05.354 poller_cost: 700 (cyc), 259 (nsec) 00:07:05.354 00:07:05.354 real 0m1.251s 00:07:05.354 user 0m1.159s 00:07:05.354 sys 0m0.086s 00:07:05.354 03:10:11 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.354 03:10:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:05.354 ************************************ 00:07:05.354 END TEST thread_poller_perf 00:07:05.354 ************************************ 00:07:05.354 03:10:11 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:05.354 03:10:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:05.354 00:07:05.354 real 0m2.658s 00:07:05.354 user 0m2.385s 00:07:05.354 sys 0m0.273s 00:07:05.354 03:10:11 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.354 03:10:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.354 ************************************ 00:07:05.354 END TEST thread 00:07:05.354 ************************************ 00:07:05.354 03:10:11 -- common/autotest_common.sh@1142 -- # return 0 00:07:05.354 03:10:11 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:05.354 03:10:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.354 03:10:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.354 03:10:11 -- common/autotest_common.sh@10 -- # set +x 00:07:05.354 ************************************ 00:07:05.354 START TEST accel 00:07:05.354 ************************************ 00:07:05.354 03:10:11 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:05.354 * Looking for test storage... 00:07:05.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:05.354 03:10:11 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:05.354 03:10:11 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:05.354 03:10:11 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:05.354 03:10:11 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3070376 00:07:05.354 03:10:11 accel -- accel/accel.sh@63 -- # waitforlisten 3070376 00:07:05.354 03:10:11 accel -- common/autotest_common.sh@829 -- # '[' -z 3070376 ']' 00:07:05.354 03:10:11 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.354 03:10:11 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:05.354 03:10:11 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:05.354 03:10:11 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.354 03:10:11 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.354 03:10:11 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
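The accel suite begins by launching a fresh spdk_tgt and blocking until its RPC socket answers. Roughly what the harness's waitforlisten amounts to (a sketch under assumption; the real helper lives in autotest_common.sh and does more bookkeeping):

    # Poll the UNIX-domain RPC socket until the new target (pid 3070376 here) responds.
    while ! rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done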
00:07:05.354 03:10:11 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.354 03:10:11 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.354 03:10:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.354 03:10:11 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.354 03:10:11 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.354 03:10:11 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.354 03:10:11 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:05.354 03:10:11 accel -- accel/accel.sh@41 -- # jq -r . 00:07:05.354 [2024-07-15 03:10:11.257661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:05.354 [2024-07-15 03:10:11.257750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070376 ] 00:07:05.354 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.354 [2024-07-15 03:10:11.321286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.354 [2024-07-15 03:10:11.417078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.613 03:10:11 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.613 03:10:11 accel -- common/autotest_common.sh@862 -- # return 0 00:07:05.613 03:10:11 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:05.613 03:10:11 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:05.613 03:10:11 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:05.613 03:10:11 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:05.613 03:10:11 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:05.613 03:10:11 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:05.613 03:10:11 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:05.613 03:10:11 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.613 03:10:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.613 03:10:11 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.613 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.613 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.613 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.613 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 
03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.614 03:10:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.614 03:10:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.614 03:10:11 accel -- accel/accel.sh@75 -- # killprocess 3070376 00:07:05.614 03:10:11 accel -- common/autotest_common.sh@948 -- # '[' -z 3070376 ']' 00:07:05.614 03:10:11 accel -- common/autotest_common.sh@952 -- # kill -0 3070376 00:07:05.614 03:10:11 accel -- common/autotest_common.sh@953 -- # uname 00:07:05.614 03:10:11 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.614 03:10:11 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3070376 00:07:05.614 03:10:11 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.614 03:10:11 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.614 03:10:11 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3070376' 00:07:05.614 killing process with pid 3070376 00:07:05.614 03:10:11 accel -- common/autotest_common.sh@967 -- # kill 3070376 00:07:05.614 03:10:11 accel -- common/autotest_common.sh@972 -- # wait 3070376 00:07:06.189 03:10:12 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:06.189 03:10:12 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:06.189 03:10:12 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:06.189 03:10:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.189 03:10:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.189 03:10:12 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:06.189 03:10:12 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:06.189 03:10:12 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:06.189 03:10:12 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.189 03:10:12 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.189 03:10:12 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.189 03:10:12 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.189 03:10:12 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.189 03:10:12 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:06.189 03:10:12 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
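The long read -r opc module loop above is the harness consuming a single RPC's output: accel_get_opc_assignments returns the opcode-to-module map as JSON, and accel.sh flattens it with the jq filter visible in the xtrace. On this run every opcode resolves to the software module:

    # As exercised above (jq filter copied verbatim from accel.sh):
    rpc_py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # emits lines like: copy=software, fill=software, crc32c=software, ...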
00:07:06.189 03:10:12 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.189 03:10:12 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:06.189 03:10:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.189 03:10:12 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:06.189 03:10:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:06.189 03:10:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.189 03:10:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.189 ************************************ 00:07:06.189 START TEST accel_missing_filename 00:07:06.189 ************************************ 00:07:06.189 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:06.189 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:06.189 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:06.189 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:06.189 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.189 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:06.189 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.189 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:06.189 03:10:12 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:06.189 03:10:12 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:06.189 03:10:12 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.189 03:10:12 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.189 03:10:12 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.189 03:10:12 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.189 03:10:12 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.189 03:10:12 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:06.189 03:10:12 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:06.189 [2024-07-15 03:10:12.265356] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:06.189 [2024-07-15 03:10:12.265422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070545 ] 00:07:06.189 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.486 [2024-07-15 03:10:12.327717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.486 [2024-07-15 03:10:12.423946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.486 [2024-07-15 03:10:12.485479] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.486 [2024-07-15 03:10:12.569831] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:06.746 A filename is required. 
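accel_missing_filename is a negative test: the NOT wrapper inverts the exit status of accel_perf, which must fail when -w compress is given no input file (-l). The es= bookkeeping below is the harness folding the raw exit code into a plain failure; reconstructed from the xtrace that follows:

    es=234                                # raw status from the failed app
    (( es > 128 )) && es=$(( es - 128 ))  # 234 -> 106: strip the signal offset
    # a case statement then maps recognised codes down to es=1 (plain failure)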
00:07:06.746 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:06.746 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.746 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:06.746 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:06.746 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:06.746 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.746 00:07:06.746 real 0m0.407s 00:07:06.746 user 0m0.289s 00:07:06.746 sys 0m0.152s 00:07:06.746 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.746 03:10:12 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:06.746 ************************************ 00:07:06.746 END TEST accel_missing_filename 00:07:06.746 ************************************ 00:07:06.746 03:10:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.746 03:10:12 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.746 03:10:12 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:06.746 03:10:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.746 03:10:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.746 ************************************ 00:07:06.746 START TEST accel_compress_verify 00:07:06.746 ************************************ 00:07:06.746 03:10:12 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.746 03:10:12 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:06.746 03:10:12 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.746 03:10:12 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:06.746 03:10:12 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.746 03:10:12 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:06.746 03:10:12 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.746 03:10:12 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.746 03:10:12 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.746 03:10:12 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:06.746 03:10:12 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.746 03:10:12 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.746 03:10:12 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.746 03:10:12 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.746 03:10:12 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.746 03:10:12 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:06.746 03:10:12 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:06.746 [2024-07-15 03:10:12.720689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:06.746 [2024-07-15 03:10:12.720762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070591 ] 00:07:06.746 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.746 [2024-07-15 03:10:12.787692] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.746 [2024-07-15 03:10:12.880933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.005 [2024-07-15 03:10:12.942779] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.005 [2024-07-15 03:10:13.022607] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:07.005 00:07:07.005 Compression does not support the verify option, aborting. 00:07:07.005 03:10:13 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:07.005 03:10:13 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.005 03:10:13 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:07.005 03:10:13 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.005 03:10:13 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:07.005 03:10:13 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.005 00:07:07.005 real 0m0.401s 00:07:07.005 user 0m0.288s 00:07:07.005 sys 0m0.149s 00:07:07.005 03:10:13 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.005 03:10:13 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:07.005 ************************************ 00:07:07.005 END TEST accel_compress_verify 00:07:07.005 ************************************ 00:07:07.005 03:10:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.005 03:10:13 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:07.005 03:10:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.005 03:10:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.005 03:10:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.005 ************************************ 00:07:07.005 START TEST accel_wrong_workload 00:07:07.005 ************************************ 00:07:07.005 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:07.005 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:07.005 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:07.270 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:07.270 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.270 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:07.270 03:10:13 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.270 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:07.270 03:10:13 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:07.270 03:10:13 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:07.270 03:10:13 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.270 03:10:13 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.271 03:10:13 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.271 03:10:13 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.271 03:10:13 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.271 03:10:13 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:07.271 03:10:13 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:07.271 Unsupported workload type: foobar 00:07:07.271 [2024-07-15 03:10:13.167355] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:07.271 accel_perf options: 00:07:07.271 [-h help message] 00:07:07.271 [-q queue depth per core] 00:07:07.271 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:07.271 [-T number of threads per core 00:07:07.271 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:07.271 [-t time in seconds] 00:07:07.271 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:07.271 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:07.271 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:07.271 [-l for compress/decompress workloads, name of uncompressed input file 00:07:07.271 [-S for crc32c workload, use this seed value (default 0) 00:07:07.271 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:07.271 [-f for fill workload, use this BYTE value (default 255) 00:07:07.271 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:07.271 [-y verify result if this switch is on] 00:07:07.271 [-a tasks to allocate per core (default: same value as -q)] 00:07:07.271 Can be used to spread operations across a wider range of memory. 
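Both workload-validation tests fail the same way: spdk_app_parse_args (app.c:1450, above and again below) rejects the option before any accel operation is issued, then prints the usage block. The two invocations, as run by accel.sh:

    NOT accel_perf -t 1 -w foobar          # unsupported workload type
    NOT accel_perf -t 1 -w xor -y -x -1    # -x must be non-negative (test below)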
00:07:07.272 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:07.272 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.272 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.272 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.272 00:07:07.272 real 0m0.024s 00:07:07.272 user 0m0.017s 00:07:07.272 sys 0m0.007s 00:07:07.272 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.272 03:10:13 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:07.272 ************************************ 00:07:07.272 END TEST accel_wrong_workload 00:07:07.272 ************************************ 00:07:07.272 Error: writing output failed: Broken pipe 00:07:07.272 03:10:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.272 03:10:13 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:07.272 03:10:13 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:07.272 03:10:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.272 03:10:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.272 ************************************ 00:07:07.272 START TEST accel_negative_buffers 00:07:07.272 ************************************ 00:07:07.272 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:07.272 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:07.272 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:07.272 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:07.272 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.272 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:07.272 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.272 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:07.272 03:10:13 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:07.272 03:10:13 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:07.273 03:10:13 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.273 03:10:13 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.273 03:10:13 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.273 03:10:13 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.273 03:10:13 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.273 03:10:13 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:07.273 03:10:13 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:07.273 -x option must be non-negative. 
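The "Error: writing output failed: Broken pipe" notices around these negative tests read as harmless side effects rather than failures; the assumption here (not verified against the harness) is that the writer feeding the JSON config into /dev/fd/62 loses its reader when accel_perf exits during option parsing:

    # accel_perf -c /dev/fd/62 ... exits at parse time; whatever is still
    # writing the config into fd 62 then takes an EPIPE -> "Broken pipe".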
00:07:07.273 [2024-07-15 03:10:13.239307] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:07.273 accel_perf options: 00:07:07.273 [-h help message] 00:07:07.273 [-q queue depth per core] 00:07:07.273 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:07.273 [-T number of threads per core 00:07:07.273 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:07.273 [-t time in seconds] 00:07:07.273 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:07.273 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:07.273 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:07.273 [-l for compress/decompress workloads, name of uncompressed input file 00:07:07.273 [-S for crc32c workload, use this seed value (default 0) 00:07:07.273 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:07.273 [-f for fill workload, use this BYTE value (default 255) 00:07:07.273 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:07.273 [-y verify result if this switch is on] 00:07:07.273 [-a tasks to allocate per core (default: same value as -q)] 00:07:07.274 Can be used to spread operations across a wider range of memory. 00:07:07.274 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:07.274 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.274 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.274 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.274 00:07:07.274 real 0m0.023s 00:07:07.274 user 0m0.013s 00:07:07.274 sys 0m0.010s 00:07:07.274 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.274 03:10:13 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:07.274 ************************************ 00:07:07.274 END TEST accel_negative_buffers 00:07:07.274 ************************************ 00:07:07.274 Error: writing output failed: Broken pipe 00:07:07.274 03:10:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.274 03:10:13 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:07.274 03:10:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:07.274 03:10:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.274 03:10:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.274 ************************************ 00:07:07.274 START TEST accel_crc32c 00:07:07.274 ************************************ 00:07:07.274 03:10:13 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:07.274 03:10:13 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:07.275 [2024-07-15 03:10:13.303291] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:07.275 [2024-07-15 03:10:13.303356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070754 ] 00:07:07.275 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.275 [2024-07-15 03:10:13.370408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.540 [2024-07-15 03:10:13.464708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:07.540 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.541 03:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.913 03:10:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.913 03:10:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:08.913 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.913 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:08.914 03:10:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.914 00:07:08.914 real 0m1.398s 00:07:08.914 user 0m1.253s 00:07:08.914 sys 0m0.149s 00:07:08.914 03:10:14 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.914 03:10:14 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:08.914 ************************************ 00:07:08.914 END TEST accel_crc32c 00:07:08.914 ************************************ 00:07:08.914 03:10:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.914 03:10:14 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:08.914 03:10:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:08.914 03:10:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.914 03:10:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.914 ************************************ 00:07:08.914 START TEST accel_crc32c_C2 00:07:08.914 ************************************ 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:08.914 [2024-07-15 03:10:14.754827] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:08.914 [2024-07-15 03:10:14.754903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070910 ] 00:07:08.914 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.914 [2024-07-15 03:10:14.819697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.914 [2024-07-15 03:10:14.910508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.914 03:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 
[xtrace: repeated "val= / case $var / IFS=: / read -r var val" teardown cycles elided]
03:10:16 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
03:10:16 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
03:10:16 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
real 0m1.405s
user 0m1.257s
sys 0m0.151s
03:10:16 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
03:10:16 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_crc32c_C2
************************************
03:10:16 accel -- common/autotest_common.sh@1142 -- # return 0
03:10:16 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
03:10:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
03:10:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
03:10:16 accel -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST accel_copy
************************************
03:10:16 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
03:10:16 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
03:10:16 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
03:10:16 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
03:10:16 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
03:10:16 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
03:10:16 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
03:10:16 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
03:10:16 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
03:10:16 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
03:10:16 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
03:10:16 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
03:10:16 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
[2024-07-15 03:10:16.204352] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
[2024-07-15 03:10:16.204420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071148 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 03:10:16.261169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 03:10:16.343884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
03:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
03:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
03:10:16 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
03:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
03:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val=software
03:10:16 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
03:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val=32
03:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val=32
03:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val=1
03:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
03:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
[xtrace: interleaved "case $var / IFS=: / read -r var val" cycles and empty "val=" reads elided]
03:10:17 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
03:10:17 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
03:10:17 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
real 0m1.391s
user 0m1.261s
sys 0m0.132s
03:10:17 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
03:10:17 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_copy
************************************
03:10:17 accel -- common/autotest_common.sh@1142 -- # return 0
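The repeated IFS=: / read -r var val / case "$var" cycles elided above come from accel.sh echoing accel_perf's configuration back one key:value pair at a time. A minimal sketch of that loop, reconstructed from the trace (the producer function and the full set of case arms are assumptions, not the verbatim accel.sh source):

  # Reconstruction of the config-read loop implied by the xtrace above.
  while IFS=: read -r var val; do            # accel.sh@19 in the trace
    case "$var" in                           # accel.sh@21
      opc)    accel_opc=$val ;;              # accel.sh@23, e.g. accel_opc=copy
      module) accel_module=$val ;;           # accel.sh@22, e.g. accel_module=software
      *)      ;;                             # queue depth, transfer size, run time, verify, ...
    esac
  done < <(emit_accel_config)                # hypothetical "key:value" producer
  # Closing assertions seen at accel.sh@27:
  [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]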
03:10:17 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
03:10:17 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
************************************
START TEST accel_fill
************************************
03:10:17 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
03:10:17 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
03:10:17 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
03:10:17 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
[xtrace: accel_test locals and build_accel_config steps identical to accel_copy above, elided]
[2024-07-15 03:10:17.640178] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
[2024-07-15 03:10:17.640247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071339 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 03:10:17.704293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 03:10:17.796978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
03:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
03:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
03:10:17 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
03:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
03:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
03:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val=software
03:10:17 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
03:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64
03:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64
03:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val=1
03:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
03:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
[xtrace: interleaved read-loop cycles and empty "val=" reads elided]
03:10:19 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
03:10:19 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
03:10:19 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
real 0m1.412s
user 0m1.272s
sys 0m0.142s
************************************
END TEST accel_fill
************************************
03:10:19 accel -- common/autotest_common.sh@1142 -- # return 0
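The fill case passes extra knobs straight through to the accel_perf example binary. A sketch of reproducing that run by hand, using only the command and flags that appear in the trace; the per-flag readings in the comment are inferred, not taken from accel_perf documentation:

  # Inferred flag meanings: -t seconds to run, -w workload, -f fill pattern byte,
  # -q queue depth, -a alignment, -y verify the result on completion.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from the trace
  "$SPDK/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y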
03:10:19 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
03:10:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
************************************
START TEST accel_copy_crc32c
************************************
03:10:19 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
[xtrace: accel_test locals and build_accel_config steps identical to accel_copy above, elided]
[2024-07-15 03:10:19.098951] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
[2024-07-15 03:10:19.099016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071499 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 03:10:19.157701] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 03:10:19.245542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
03:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
[xtrace: interleaved read-loop cycles and empty "val=" reads elided]
03:10:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
03:10:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
03:10:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
real 0m1.387s
user 0m1.250s
sys 0m0.140s
************************************
END TEST accel_copy_crc32c
************************************
03:10:20 accel -- common/autotest_common.sh@1142 -- # return 0
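Each case above is wrapped by run_test from common/autotest_common.sh, which prints the START/END banners and whose timing shows up as the real/user/sys triple. A simplified sketch of the wrapper's visible behavior (a reconstruction from the log, not the verbatim helper):

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # produces the real/user/sys lines in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y   # as invoked above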
03:10:20 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
03:10:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
************************************
START TEST accel_copy_crc32c_C2
************************************
03:10:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
[xtrace: accel_test locals and build_accel_config steps identical to accel_copy above, elided]
[2024-07-15 03:10:20.530751] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
[2024-07-15 03:10:20.530819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071655 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 03:10:20.595722] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 03:10:20.693232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
03:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
[xtrace: interleaved read-loop cycles and empty "val=" reads elided]
03:10:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
03:10:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
03:10:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
real 0m1.402s
user 0m1.250s
sys 0m0.155s
************************************
END TEST accel_copy_crc32c_C2
************************************
03:10:21 accel -- common/autotest_common.sh@1142 -- # return 0
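The _C2 variant reruns copy_crc32c with -C 2, and consistent with that the trace reads two buffer sizes ('4096 bytes' and '8192 bytes') instead of one. Both invocations as they appear in the log; reading -C as a chained source-buffer count is an inference from the trace, not a documented claim:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y        # single 4 KiB source
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2   # chained variant per the trace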
03:10:21 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
03:10:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
************************************
START TEST accel_dualcast
************************************
03:10:21 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
03:10:21 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
03:10:21 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
03:10:21 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
[xtrace: accel_test locals and build_accel_config steps identical to accel_copy above, elided]
[2024-07-15 03:10:21.979314] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
[2024-07-15 03:10:21.979378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071928 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 03:10:22.041740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 03:10:22.133926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
03:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
03:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
03:10:22 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
03:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
03:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
03:10:22 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
03:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
03:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
03:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
03:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
03:10:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
[xtrace: interleaved read-loop cycles and empty "val=" reads elided]
03:10:23 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
03:10:23 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
03:10:23 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
real 0m1.393s
user 0m1.253s
sys 0m0.142s
************************************
END TEST accel_dualcast
************************************
03:10:23 accel -- common/autotest_common.sh@1142 -- # return 0
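All of these cases follow one shape: run accel_perf against a workload for one second, verify the output, and assert the software module handled it. A compact loop form of the same default invocations (a sketch only; the suite actually drives each case individually through run_test/accel_test, and fill and the _C2 case add extra flags as shown above):

  for w in copy fill copy_crc32c dualcast compare xor; do
    run_test "accel_$w" accel_test -t 1 -w "$w" -y
  done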
00:07:17.440 [2024-07-15 03:10:23.417336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072085 ] 00:07:17.440 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.440 [2024-07-15 03:10:23.474011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.440 [2024-07-15 03:10:23.559683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.698 03:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.075 
03:10:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:19.075 03:10:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.075 00:07:19.075 real 0m1.398s 00:07:19.075 user 0m1.261s 00:07:19.075 sys 0m0.140s 00:07:19.075 03:10:24 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.075 03:10:24 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:19.075 ************************************ 00:07:19.075 END TEST accel_compare 00:07:19.076 ************************************ 00:07:19.076 03:10:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.076 03:10:24 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:19.076 03:10:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:19.076 03:10:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.076 03:10:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.076 ************************************ 00:07:19.076 START TEST accel_xor 00:07:19.076 ************************************ 00:07:19.076 03:10:24 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:19.076 03:10:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:19.076 [2024-07-15 03:10:24.863095] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
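
The accel_compare case above reduces to the single accel_perf invocation visible on its @12 trace line. A minimal standalone re-run, assuming the same build tree; the harness additionally passes -c /dev/fd/62 to feed its generated JSON accel config over process substitution, but the trace's accel_json_cfg=() is empty here, so that flag is omitted in this sketch:

  # Hypothetical reproduction of the traced compare workload; SPDK_DIR is an assumption.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compare -y   # 1-second run, verify results (-y matches val=Yes)

The val= assignments in the surrounding trace are accel_test's read loop echoing that configuration back: compare opcode, 4096-byte buffers, software module, a 1-second timed run.
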
00:07:19.076 [2024-07-15 03:10:24.863162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072238 ] 00:07:19.076 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.076 [2024-07-15 03:10:24.924646] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.076 [2024-07-15 03:10:25.015788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.076 03:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.458 00:07:20.458 real 0m1.398s 00:07:20.458 user 0m1.258s 00:07:20.458 sys 0m0.143s 00:07:20.458 03:10:26 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.458 03:10:26 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:20.458 ************************************ 00:07:20.458 END TEST accel_xor 00:07:20.458 ************************************ 00:07:20.458 03:10:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.458 03:10:26 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:20.458 03:10:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:20.458 03:10:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.458 03:10:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.458 ************************************ 00:07:20.458 START TEST accel_xor 00:07:20.458 ************************************ 00:07:20.458 03:10:26 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:20.458 [2024-07-15 03:10:26.306861] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
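
The two accel_xor cases differ only in source-buffer count: the trace records val=2 for the default run and val=3 once -x 3 is passed. Side by side, under the same path assumptions as the sketch above:

  # Hypothetical: XOR over the default two source buffers, then over three (-x 3).
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3
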
00:07:20.458 [2024-07-15 03:10:26.306963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072400 ] 00:07:20.458 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.458 [2024-07-15 03:10:26.369920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.458 [2024-07-15 03:10:26.462056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.458 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.459 03:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:21.835 03:10:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.835 00:07:21.835 real 0m1.402s 00:07:21.835 user 0m1.255s 00:07:21.835 sys 0m0.150s 00:07:21.835 03:10:27 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.835 03:10:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:21.835 ************************************ 00:07:21.835 END TEST accel_xor 00:07:21.835 ************************************ 00:07:21.835 03:10:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.835 03:10:27 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:21.835 03:10:27 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:21.835 03:10:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.835 03:10:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.835 ************************************ 00:07:21.835 START TEST accel_dif_verify 00:07:21.835 ************************************ 00:07:21.835 03:10:27 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:21.835 03:10:27 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:21.835 [2024-07-15 03:10:27.753136] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
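
Each case enters through the same run_test wrapper from autotest_common.sh; the '[' 6 -le 1 ']' / '[' 7 -le 1 ']' tests in the trace look like its argument-count guard, and xtrace_disable suppresses tracing of the bookkeeping between cases. A condensed, editorial sketch of the dif sequence this section runs (run_test's own body is elided, since only these entry checks appear in the log):

  # Hypothetical condensation of the traced calls; names are taken from the log.
  run_test accel_dif_verify        accel_test -t 1 -w dif_verify
  run_test accel_dif_generate      accel_test -t 1 -w dif_generate
  run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
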
00:07:21.836 [2024-07-15 03:10:27.753201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072668 ] 00:07:21.836 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.836 [2024-07-15 03:10:27.816836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.836 [2024-07-15 03:10:27.908976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.836 03:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:23.217 03:10:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.217 00:07:23.217 real 0m1.397s 00:07:23.217 user 0m1.263s 00:07:23.217 sys 0m0.138s 00:07:23.217 03:10:29 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.217 03:10:29 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:23.217 ************************************ 00:07:23.217 END TEST accel_dif_verify 00:07:23.217 ************************************ 00:07:23.217 03:10:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.217 03:10:29 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:23.217 03:10:29 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:23.217 03:10:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.217 03:10:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.217 ************************************ 00:07:23.217 START TEST accel_dif_generate 00:07:23.217 ************************************ 00:07:23.217 03:10:29 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.217 
03:10:29 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:23.217 03:10:29 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:23.217 [2024-07-15 03:10:29.203028] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:23.217 [2024-07-15 03:10:29.203092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072830 ] 00:07:23.217 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.217 [2024-07-15 03:10:29.265550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.217 [2024-07-15 03:10:29.356299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:23.476 03:10:29 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.476 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.477 03:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.854 03:10:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:24.854 03:10:30 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.854 00:07:24.854 real 0m1.405s 00:07:24.854 user 0m1.265s 00:07:24.854 sys 0m0.145s 00:07:24.854 03:10:30 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.854 03:10:30 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:24.854 ************************************ 00:07:24.854 END TEST accel_dif_generate 00:07:24.854 ************************************ 00:07:24.854 03:10:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.854 03:10:30 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:24.854 03:10:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:24.854 03:10:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.854 03:10:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.854 ************************************ 00:07:24.854 START TEST accel_dif_generate_copy 00:07:24.854 ************************************ 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:24.854 [2024-07-15 03:10:30.657695] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
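
Every case ends with the same three @27 assertions before its END banner: a module was chosen, the opcode round-tripped, and the module that actually ran is the software one (the \s\o\f\t\w\a\r\e pattern is just xtrace escaping each character of the quoted literal match). Restated over the variables the trace shows being set at @22/@23:

  # Hypothetical restatement of the @27 pass criteria.
  [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]

The real/user figures of roughly 1.4s/1.3s printed before each banner are consistent with the -t 1 one-second measurement window plus app startup and teardown.
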
00:07:24.854 [2024-07-15 03:10:30.657768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072987 ] 00:07:24.854 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.854 [2024-07-15 03:10:30.720981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.854 [2024-07-15 03:10:30.813300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.854 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.855 03:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.258 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.259 00:07:26.259 real 0m1.413s 00:07:26.259 user 0m1.266s 00:07:26.259 sys 0m0.150s 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.259 03:10:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:26.259 ************************************ 00:07:26.259 END TEST accel_dif_generate_copy 00:07:26.259 ************************************ 00:07:26.259 03:10:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.259 03:10:32 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:26.259 03:10:32 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.259 03:10:32 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:26.259 03:10:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.259 03:10:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.259 ************************************ 00:07:26.259 START TEST accel_comp 00:07:26.259 ************************************ 00:07:26.259 03:10:32 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.259 03:10:32 accel.accel_comp -- 
00:07:26.259 03:10:32 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:07:26.259 03:10:32 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:26.259 03:10:32 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:07:26.259 03:10:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:26.259 03:10:32 accel -- common/autotest_common.sh@10 -- # set +x
00:07:26.259 ************************************
00:07:26.259 START TEST accel_comp
00:07:26.259 ************************************
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:07:26.259 [2024-07-15 03:10:32.113245] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:26.259 [2024-07-15 03:10:32.113309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073225 ]
00:07:26.259 EAL: No free 2048 kB hugepages reported on node 1
00:07:26.259 [2024-07-15 03:10:32.175745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.259 [2024-07-15 03:10:32.268526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:26.259 03:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:07:27.641 03:10:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:27.641 03:10:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:07:27.641 03:10:33 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:27.641 real    0m1.416s
00:07:27.641 user    0m1.275s
00:07:27.641 sys     0m0.145s
00:07:27.641 03:10:33 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:27.641 03:10:33 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:07:27.641 ************************************
00:07:27.641 END TEST accel_comp
00:07:27.641 ************************************
00:07:27.641 03:10:33 accel -- common/autotest_common.sh@1142 -- # return 0
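The three `[[ ... ]]` lines that close each block appear to be the pass criteria: a module name and an opcode must have been captured from the accel_perf output, and on this CI node the module must resolve to the software fallback (no hardware accel engine is present). A minimal sketch of that idiom, using the accel_module/accel_opc names the trace itself prints:

    # Pass criteria mirrored from the trace above: both variables were
    # parsed out of the accel_perf output, and the software engine ran the op.
    [[ -n "$accel_module" ]]
    [[ -n "$accel_opc" ]]
    [[ "$accel_module" == software ]]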
00:07:27.641 03:10:33 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:27.641 03:10:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:27.641 03:10:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:27.641 03:10:33 accel -- common/autotest_common.sh@10 -- # set +x
00:07:27.641 ************************************
00:07:27.641 START TEST accel_decomp
00:07:27.641 ************************************
00:07:27.641 03:10:33 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:27.641 03:10:33 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:27.641 03:10:33 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:07:27.641 03:10:33 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:07:27.641 [2024-07-15 03:10:33.579284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:27.641 [2024-07-15 03:10:33.579352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073417 ]
00:07:27.641 EAL: No free 2048 kB hugepages reported on node 1
00:07:27.641 [2024-07-15 03:10:33.640821] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:27.641 [2024-07-15 03:10:33.736002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:27.902 03:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:07:28.837 03:10:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:28.837 03:10:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:28.837 03:10:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:28.837 real    0m1.419s
00:07:28.837 user    0m1.275s
00:07:28.837 sys     0m0.148s
00:07:28.837 03:10:34 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:28.837 03:10:34 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:07:28.837 ************************************
00:07:28.837 END TEST accel_decomp
00:07:28.837 ************************************
00:07:29.095 03:10:34 accel -- common/autotest_common.sh@1142 -- # return 0
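The decompress runs add -y to the accel_test line, and the traced option that read val=No in the compress and dif cases flips to val=Yes here, which by all appearances enables verification of the decompressed output against the bib input. A sketch under that assumption (paths copied from the log):

    # Assumed meaning of -y: verify the result of each decompress operation.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y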
00:07:29.095 03:10:34 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:29.095 03:10:34 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:07:29.095 03:10:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:29.095 03:10:34 accel -- common/autotest_common.sh@10 -- # set +x
00:07:29.095 ************************************
00:07:29.095 START TEST accel_decomp_full
00:07:29.095 ************************************
00:07:29.095 03:10:35 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:29.095 03:10:35 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:29.095 03:10:35 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
00:07:29.095 03:10:35 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r .
00:07:29.095 [2024-07-15 03:10:35.040100] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:29.095 [2024-07-15 03:10:35.040163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073571 ]
00:07:29.095 EAL: No free 2048 kB hugepages reported on node 1
00:07:29.095 [2024-07-15 03:10:35.102315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:29.095 [2024-07-15 03:10:35.194772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:29.354 03:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:07:29.354 03:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress
00:07:29.354 03:10:35 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:29.354 03:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:29.354 03:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software
00:07:29.354 03:10:35 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:07:29.355 03:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:29.355 03:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:29.355 03:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:29.355 03:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:07:29.355 03:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:07:29.355 03:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:07:30.295 03:10:36 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:30.295 03:10:36 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:30.295 03:10:36 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:30.295 real    0m1.408s
00:07:30.295 user    0m1.275s
00:07:30.295 sys     0m0.136s
00:07:30.295 03:10:36 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:30.295 03:10:36 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:07:30.295 ************************************
00:07:30.295 END TEST accel_decomp_full
00:07:30.295 ************************************
00:07:30.554 03:10:36 accel -- common/autotest_common.sh@1142 -- # return 0
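The only change in the "full" variant is -o 0 on the command line, and the traced buffer size grows from '4096 bytes' to '111250 bytes', so the whole bib file appears to be handled per operation rather than in 4 KiB chunks. A sketch under that reading (the interpretation of -o is inferred from the trace, not from accel_perf documentation):

    # Assumed meaning of -o 0: let each op span the entire input file
    # (the trace's '111250 bytes') instead of the default '4096 bytes'.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -o 0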
00:07:30.554 03:10:36 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:30.554 03:10:36 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:07:30.554 03:10:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:30.554 03:10:36 accel -- common/autotest_common.sh@10 -- # set +x
00:07:30.554 ************************************
00:07:30.554 START TEST accel_decomp_mcore
00:07:30.554 ************************************
00:07:30.554 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:30.554 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:30.554 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:07:30.555 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
00:07:30.555 [2024-07-15 03:10:36.500551] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:30.555 [2024-07-15 03:10:36.500621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073735 ]
00:07:30.555 EAL: No free 2048 kB hugepages reported on node 1
00:07:30.555 [2024-07-15 03:10:36.564674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:30.555 [2024-07-15 03:10:36.663033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:30.555 [2024-07-15 03:10:36.663100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:30.555 [2024-07-15 03:10:36.663197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:30.555 [2024-07-15 03:10:36.663200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:07:30.815 03:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:07:31.754 03:10:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:31.754 03:10:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:31.754 03:10:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:31.754 real    0m1.415s
00:07:31.754 user    0m4.697s
00:07:31.754 sys     0m0.150s
00:07:31.754 03:10:37 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:31.754 03:10:37 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:31.754 ************************************
00:07:31.754 END TEST accel_decomp_mcore
00:07:31.754 ************************************
00:07:32.013 03:10:37 accel -- common/autotest_common.sh@1142 -- # return 0
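With -m 0xf the EAL coremask switches from -c 0x1 to -c 0xf, four reactors start (cores 0 through 3), and user time jumps to roughly four times the 1-second wall time, consistent with four cores running the workload in parallel. 0xf is simply the mask with the low four bits set; a mask for the first N cores can be built like this:

    # Build a cpumask covering cores 0..N-1 (0xf for N=4, as in this run).
    ncores=4
    printf -v cpumask '0x%x' $(( (1 << ncores) - 1 ))
    echo "$cpumask"    # -> 0xf, suitable for accel_perf -m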
00:07:32.013 [2024-07-15 03:10:37.960241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074004 ] 00:07:32.013 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.013 [2024-07-15 03:10:38.022928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.013 [2024-07-15 03:10:38.119507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.013 [2024-07-15 03:10:38.119557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.013 [2024-07-15 03:10:38.119670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.013 [2024-07-15 03:10:38.119673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.273 03:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.648 00:07:33.648 real 0m1.419s 00:07:33.648 user 0m4.724s 00:07:33.648 sys 0m0.161s 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.648 03:10:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:33.648 ************************************ 00:07:33.648 END TEST accel_decomp_full_mcore 00:07:33.648 ************************************ 00:07:33.648 03:10:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.648 03:10:39 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.648 03:10:39 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:33.648 03:10:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.648 03:10:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.648 ************************************ 00:07:33.648 START TEST accel_decomp_mthread 00:07:33.648 ************************************ 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:33.648 [2024-07-15 03:10:39.423103] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
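The accel_perf invocation traced just above is the engine behind every decompress case in this suite. A minimal bash sketch of the same call, using only flags that appear in the trace (the meaning given for -T is an inference from the test name and the value 2 traced above, so treat it as an assumption):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -c: accel JSON config fed in on a file descriptor (built by build_accel_config)
    # -t 1: run the workload for 1 second ('1 seconds' in the config dump)
    # -w decompress: workload under test; -l: compressed input file (test/accel/bib)
    # -y: verify the output; -T 2: two worker threads (assumption, see above)
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -T 2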
00:07:33.648 [2024-07-15 03:10:39.423162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074171 ] 00:07:33.648 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.648 [2024-07-15 03:10:39.485251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.648 [2024-07-15 03:10:39.578139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.648 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.649 03:10:39 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.649 03:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:40 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.029 00:07:35.029 real 0m1.417s 00:07:35.029 user 0m1.279s 00:07:35.029 sys 0m0.142s 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.029 03:10:40 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:35.029 ************************************ 00:07:35.029 END TEST accel_decomp_mthread 00:07:35.029 ************************************ 00:07:35.029 03:10:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.029 03:10:40 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.029 03:10:40 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:35.029 03:10:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.029 03:10:40 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.029 ************************************ 00:07:35.029 START TEST accel_decomp_full_mthread 00:07:35.029 ************************************ 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:35.029 03:10:40 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:35.029 [2024-07-15 03:10:40.894260] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
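This run differs from the previous one only by the added -o 0, which appears to select the full-buffer variant (the config dump traces '111250 bytes' here instead of the '4096 bytes' of the plain mthread case). The START TEST / END TEST banners and the real/user/sys lines that bracket each case come from the run_test wrapper; a hedged reconstruction of that pattern, not the actual common/autotest_common.sh code:

    run_test() {
        local name=$1
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"            # produces the real/user/sys lines seen in this log
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }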
00:07:35.029 [2024-07-15 03:10:40.894326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074323 ] 00:07:35.029 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.029 [2024-07-15 03:10:40.958328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.029 [2024-07-15 03:10:41.046979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.029 03:10:41 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.029 03:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.406 00:07:36.406 real 0m1.434s 00:07:36.406 user 0m1.286s 00:07:36.406 sys 0m0.151s 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.406 03:10:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:36.406 ************************************ 00:07:36.406 END 
TEST accel_decomp_full_mthread 00:07:36.406 ************************************ 00:07:36.406 03:10:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.406 03:10:42 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:36.406 03:10:42 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:36.406 03:10:42 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:36.406 03:10:42 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:36.406 03:10:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.406 03:10:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.406 03:10:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.406 03:10:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.406 03:10:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.406 03:10:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.406 03:10:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.406 03:10:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:36.406 03:10:42 accel -- accel/accel.sh@41 -- # jq -r . 00:07:36.406 ************************************ 00:07:36.406 START TEST accel_dif_functional_tests 00:07:36.406 ************************************ 00:07:36.406 03:10:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:36.406 [2024-07-15 03:10:42.390233] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:36.406 [2024-07-15 03:10:42.390310] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074588 ] 00:07:36.406 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.406 [2024-07-15 03:10:42.451213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.406 [2024-07-15 03:10:42.546325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.406 [2024-07-15 03:10:42.546396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.406 [2024-07-15 03:10:42.546399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.666 00:07:36.666 00:07:36.666 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.666 http://cunit.sourceforge.net/ 00:07:36.666 00:07:36.666 00:07:36.666 Suite: accel_dif 00:07:36.666 Test: verify: DIF generated, GUARD check ...passed 00:07:36.666 Test: verify: DIF generated, APPTAG check ...passed 00:07:36.666 Test: verify: DIF generated, REFTAG check ...passed 00:07:36.666 Test: verify: DIF not generated, GUARD check ...[2024-07-15 03:10:42.640155] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.666 passed 00:07:36.666 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 03:10:42.640238] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.666 passed 00:07:36.666 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 03:10:42.640287] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.666 passed 00:07:36.666 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:36.666 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
03:10:42.640347] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:36.666 passed 00:07:36.666 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:36.666 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:36.666 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:36.666 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 03:10:42.640483] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:36.666 passed 00:07:36.666 Test: verify copy: DIF generated, GUARD check ...passed 00:07:36.666 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:36.666 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:36.666 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 03:10:42.640629] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.666 passed 00:07:36.666 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 03:10:42.640664] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.666 passed 00:07:36.666 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 03:10:42.640695] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.666 passed 00:07:36.666 Test: generate copy: DIF generated, GUARD check ...passed 00:07:36.666 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:36.666 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:36.666 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:36.666 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:36.666 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:36.666 Test: generate copy: iovecs-len validate ...[2024-07-15 03:10:42.640934] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:36.666 passed 00:07:36.666 Test: generate copy: buffer alignment validate ...passed 00:07:36.666 00:07:36.666 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.666 suites 1 1 n/a 0 0 00:07:36.666 tests 26 26 26 0 0 00:07:36.666 asserts 115 115 115 0 n/a 00:07:36.666 00:07:36.666 Elapsed time = 0.002 seconds 00:07:36.925 00:07:36.925 real 0m0.493s 00:07:36.925 user 0m0.751s 00:07:36.925 sys 0m0.188s 00:07:36.925 03:10:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.925 03:10:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:36.925 ************************************ 00:07:36.925 END TEST accel_dif_functional_tests 00:07:36.925 ************************************ 00:07:36.925 03:10:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.925 00:07:36.925 real 0m31.714s 00:07:36.925 user 0m35.098s 00:07:36.925 sys 0m4.608s 00:07:36.925 03:10:42 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.925 03:10:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.925 ************************************ 00:07:36.925 END TEST accel 00:07:36.925 ************************************ 00:07:36.925 03:10:42 -- common/autotest_common.sh@1142 -- # return 0 00:07:36.925 03:10:42 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:36.925 03:10:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.925 03:10:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.925 03:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:36.925 ************************************ 00:07:36.925 START TEST accel_rpc 00:07:36.925 ************************************ 00:07:36.925 03:10:42 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:36.925 * Looking for test storage... 00:07:36.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:36.925 03:10:42 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:36.925 03:10:42 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3074669 00:07:36.925 03:10:42 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:36.925 03:10:42 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3074669 00:07:36.925 03:10:42 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3074669 ']' 00:07:36.925 03:10:42 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.925 03:10:42 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.925 03:10:42 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.925 03:10:42 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.925 03:10:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.925 [2024-07-15 03:10:43.024351] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
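The accel_rpc suite starting here exercises opcode-to-module assignment over JSON-RPC against a target launched with --wait-for-rpc. Every RPC name below appears verbatim in the trace; the rpc.py wiring is a hedged sketch of the same flow, not the test script itself:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    tgt_pid=$!
    # (the real script blocks on waitforlisten until the RPC socket is up)
    # Before framework init, any module name is accepted for an assignment --
    # hence the 'assigned to module incorrect' NOTICE in the log below:
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m incorrect
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK/scripts/rpc.py" framework_start_init
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # -> software
    kill $tgt_pid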
00:07:36.926 [2024-07-15 03:10:43.024445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074669 ] 00:07:36.926 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.186 [2024-07-15 03:10:43.082341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.186 [2024-07-15 03:10:43.166021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.186 03:10:43 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.186 03:10:43 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:37.186 03:10:43 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:37.186 03:10:43 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:37.186 03:10:43 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:37.186 03:10:43 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:37.186 03:10:43 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:37.186 03:10:43 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.186 03:10:43 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.186 03:10:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.186 ************************************ 00:07:37.186 START TEST accel_assign_opcode 00:07:37.186 ************************************ 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.186 [2024-07-15 03:10:43.250719] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.186 [2024-07-15 03:10:43.258737] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.186 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.444 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.444 03:10:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:37.444 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.444 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:07:37.444 03:10:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:37.444 03:10:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:37.444 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.444 software 00:07:37.444 00:07:37.444 real 0m0.292s 00:07:37.444 user 0m0.041s 00:07:37.444 sys 0m0.009s 00:07:37.444 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.444 03:10:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.444 ************************************ 00:07:37.444 END TEST accel_assign_opcode 00:07:37.444 ************************************ 00:07:37.444 03:10:43 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:37.444 03:10:43 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3074669 00:07:37.444 03:10:43 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3074669 ']' 00:07:37.444 03:10:43 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3074669 00:07:37.444 03:10:43 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:37.444 03:10:43 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.444 03:10:43 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3074669 00:07:37.444 03:10:43 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.444 03:10:43 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.444 03:10:43 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3074669' 00:07:37.444 killing process with pid 3074669 00:07:37.444 03:10:43 accel_rpc -- common/autotest_common.sh@967 -- # kill 3074669 00:07:37.444 03:10:43 accel_rpc -- common/autotest_common.sh@972 -- # wait 3074669 00:07:38.009 00:07:38.009 real 0m1.066s 00:07:38.009 user 0m1.003s 00:07:38.009 sys 0m0.414s 00:07:38.009 03:10:43 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.009 03:10:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.009 ************************************ 00:07:38.009 END TEST accel_rpc 00:07:38.009 ************************************ 00:07:38.009 03:10:44 -- common/autotest_common.sh@1142 -- # return 0 00:07:38.009 03:10:44 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.010 03:10:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.010 03:10:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.010 03:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.010 ************************************ 00:07:38.010 START TEST app_cmdline 00:07:38.010 ************************************ 00:07:38.010 03:10:44 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.010 * Looking for test storage... 
00:07:38.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:38.010 03:10:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:38.010 03:10:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3074873 00:07:38.010 03:10:44 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:38.010 03:10:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3074873 00:07:38.010 03:10:44 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3074873 ']' 00:07:38.010 03:10:44 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.010 03:10:44 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.010 03:10:44 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.010 03:10:44 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.010 03:10:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.010 [2024-07-15 03:10:44.136243] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:38.010 [2024-07-15 03:10:44.136335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074873 ] 00:07:38.269 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.269 [2024-07-15 03:10:44.195230] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.269 [2024-07-15 03:10:44.280700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.529 03:10:44 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.529 03:10:44 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:38.529 03:10:44 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:38.787 { 00:07:38.787 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:38.787 "fields": { 00:07:38.787 "major": 24, 00:07:38.787 "minor": 9, 00:07:38.787 "patch": 0, 00:07:38.787 "suffix": "-pre", 00:07:38.787 "commit": "719d03c6a" 00:07:38.787 } 00:07:38.787 } 00:07:38.787 03:10:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:38.787 03:10:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:38.788 03:10:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:38.788 03:10:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:38.788 03:10:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.788 03:10:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:38.788 03:10:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.788 03:10:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:38.788 03:10:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:38.788 03:10:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:38.788 03:10:44 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.047 request: 00:07:39.047 { 00:07:39.047 "method": "env_dpdk_get_mem_stats", 00:07:39.047 "req_id": 1 00:07:39.047 } 00:07:39.047 Got JSON-RPC error response 00:07:39.047 response: 00:07:39.047 { 00:07:39.047 "code": -32601, 00:07:39.047 "message": "Method not found" 00:07:39.047 } 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:39.047 03:10:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3074873 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3074873 ']' 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3074873 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3074873 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3074873' 00:07:39.047 killing process with pid 3074873 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@967 -- # kill 3074873 00:07:39.047 03:10:45 app_cmdline -- common/autotest_common.sh@972 -- # wait 3074873 00:07:39.615 00:07:39.615 real 0m1.466s 00:07:39.615 user 0m1.792s 00:07:39.615 sys 0m0.457s 00:07:39.615 03:10:45 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
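The "Method not found" response above is the point of the cmdline test: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable and anything else is rejected with JSON-RPC error -32601. A hedged sketch of the observable behaviour, using the paths from this workspace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    "$SPDK/scripts/rpc.py" spdk_get_version       # allowed: returns the version JSON above
    "$SPDK/scripts/rpc.py" rpc_get_methods        # allowed: lists exactly these two methods
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats # rejected with -32601 'Method not found'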
00:07:39.615 03:10:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 ************************************ 00:07:39.615 END TEST app_cmdline 00:07:39.615 ************************************ 00:07:39.615 03:10:45 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.615 03:10:45 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:39.615 03:10:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.615 03:10:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.615 03:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 ************************************ 00:07:39.615 START TEST version 00:07:39.615 ************************************ 00:07:39.615 03:10:45 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:39.615 * Looking for test storage... 00:07:39.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:39.615 03:10:45 version -- app/version.sh@17 -- # get_header_version major 00:07:39.615 03:10:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.615 03:10:45 version -- app/version.sh@14 -- # cut -f2 00:07:39.615 03:10:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.615 03:10:45 version -- app/version.sh@17 -- # major=24 00:07:39.615 03:10:45 version -- app/version.sh@18 -- # get_header_version minor 00:07:39.615 03:10:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.615 03:10:45 version -- app/version.sh@14 -- # cut -f2 00:07:39.615 03:10:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.615 03:10:45 version -- app/version.sh@18 -- # minor=9 00:07:39.615 03:10:45 version -- app/version.sh@19 -- # get_header_version patch 00:07:39.615 03:10:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.615 03:10:45 version -- app/version.sh@14 -- # cut -f2 00:07:39.615 03:10:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.615 03:10:45 version -- app/version.sh@19 -- # patch=0 00:07:39.615 03:10:45 version -- app/version.sh@20 -- # get_header_version suffix 00:07:39.615 03:10:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.615 03:10:45 version -- app/version.sh@14 -- # cut -f2 00:07:39.615 03:10:45 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.615 03:10:45 version -- app/version.sh@20 -- # suffix=-pre 00:07:39.615 03:10:45 version -- app/version.sh@22 -- # version=24.9 00:07:39.615 03:10:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:39.615 03:10:45 version -- app/version.sh@28 -- # version=24.9rc0 00:07:39.615 03:10:45 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:39.615 03:10:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:39.615 03:10:45 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:39.615 03:10:45 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:39.615 00:07:39.615 real 0m0.109s 00:07:39.615 user 0m0.062s 00:07:39.615 sys 0m0.069s 00:07:39.615 03:10:45 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.615 03:10:45 version -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 ************************************ 00:07:39.615 END TEST version 00:07:39.615 ************************************ 00:07:39.615 03:10:45 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.615 03:10:45 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:39.615 03:10:45 -- spdk/autotest.sh@198 -- # uname -s 00:07:39.615 03:10:45 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:39.615 03:10:45 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:39.615 03:10:45 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:39.615 03:10:45 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:39.615 03:10:45 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:39.615 03:10:45 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:39.615 03:10:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.615 03:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 03:10:45 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:39.615 03:10:45 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:39.615 03:10:45 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:39.615 03:10:45 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:39.615 03:10:45 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:39.615 03:10:45 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:39.615 03:10:45 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.615 03:10:45 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:39.615 03:10:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.615 03:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 ************************************ 00:07:39.615 START TEST nvmf_tcp 00:07:39.615 ************************************ 00:07:39.615 03:10:45 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.875 * Looking for test storage... 00:07:39.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.875 03:10:45 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.875 03:10:45 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.875 03:10:45 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.875 03:10:45 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.875 03:10:45 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.875 03:10:45 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.875 03:10:45 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:39.875 03:10:45 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:39.875 03:10:45 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.875 03:10:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:39.875 03:10:45 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:39.875 03:10:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:39.875 03:10:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.875 03:10:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.875 ************************************ 00:07:39.875 START TEST nvmf_example 00:07:39.875 ************************************ 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:39.875 * Looking for test storage... 
00:07:39.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.875 03:10:45 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.876 03:10:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:41.781 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:41.781 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:41.781 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:41.781 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:41.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:41.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms
00:07:41.781
00:07:41.781 --- 10.0.0.2 ping statistics ---
00:07:41.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:41.781 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:07:41.781 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:42.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:42.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms
00:07:42.041
00:07:42.041 --- 10.0.0.1 ping statistics ---
00:07:42.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:42.041 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3076842 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3076842 00:07:42.041 03:10:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3076842 ']' 00:07:42.042 03:10:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.042 03:10:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.042 03:10:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
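The nvmf_tcp_init sequence traced above is the whole test topology: one port of the NIC (cvl_0_0) is moved into a fresh network namespace to act as the NVMe-oF target at 10.0.0.2, the sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, iptables opens the NVMe/TCP port 4420, and one ping in each direction proves reachability before any NVMe traffic flows. As a standalone sketch (commands lifted from the trace; the cvl_0_0/cvl_0_1 names and the wire-level reachability between the two ports are specific to this CI host; run as root):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0                     # start from unconfigured interfaces
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target-side port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on 4420
    ping -c 1 10.0.0.2                           # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target namespace -> root namespace

Parking the target behind a namespace lets initiator and target share one machine while still exercising a real NIC datapath rather than loopback, which is why every target-side command below is wrapped in ip netns exec cvl_0_0_ns_spdk.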
00:07:42.042 03:10:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.042 03:10:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.042 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:42.300 03:10:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:42.300 EAL: No free 2048 kB hugepages reported on node 1 
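The rpc_cmd calls traced above are thin wrappers around SPDK's JSON-RPC client, so the same target bring-up can be replayed by hand with scripts/rpc.py against any SPDK target app listening on the default /var/tmp/spdk.sock socket. A sketch with every parameter value copied from the trace (the flag glosses in the comments are informational, not from the log):

    # Register the TCP transport ('-t tcp -o' comes from NVMF_TRANSPORT_OPTS, plus '-u 8192')
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Create a 64 MiB malloc bdev with 512-byte blocks; the RPC prints its name (Malloc0 here)
    scripts/rpc.py bdev_malloc_create 64 512
    # Subsystem cnode1: -a allows any host NQN, -s sets the serial number
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Expose the bdev as a namespace and listen on the target-side address
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Drive I/O from the initiator side, exactly as the traced run does:
    # queue depth 64, 4 KiB I/Os, random 30% read / 70% write mix, for 10 seconds
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The summary table that follows is this run's output: 14720.86 IOPS of 4 KiB I/O corresponds to the reported 57.50 MiB/s (14720.86 x 4096 / 2^20), at 4347.26 us (about 4.35 ms) average latency under queue depth 64.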
00:07:54.597 Initializing NVMe Controllers
00:07:54.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:54.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:54.597 Initialization complete. Launching workers.
00:07:54.597 ========================================================
00:07:54.597 Latency(us)
00:07:54.597 Device Information : IOPS MiB/s Average min max
00:07:54.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14720.86 57.50 4347.26 829.13 16287.91
00:07:54.597 ========================================================
00:07:54.597 Total : 14720.86 57.50 4347.26 829.13 16287.91
00:07:54.597
00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:54.597 rmmod nvme_tcp 00:07:54.597 rmmod nvme_fabrics 00:07:54.597 rmmod nvme_keyring 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3076842 ']' 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3076842 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3076842 ']' 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3076842 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3076842 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3076842' killing process with pid 3076842 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3076842 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3076842 00:07:54.597 nvmf threads initialize successfully 00:07:54.597 bdev subsystem init successfully 00:07:54.597 created a nvmf target service 00:07:54.597 create targets's poll groups done 00:07:54.597 all subsystems of target started 00:07:54.597 nvmf target is running 00:07:54.597 all subsystems of target stopped 00:07:54.597 destroy targets's poll groups done 00:07:54.597 destroyed the nvmf target service 00:07:54.597 bdev subsystem finish successfully 00:07:54.597 nvmf threads destroy successfully 00:07:54.597 03:10:58 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.597 03:10:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.855 03:11:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:54.855 03:11:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:54.855 03:11:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.855 03:11:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:54.855 00:07:54.855 real 0m15.137s 00:07:54.855 user 0m42.148s 00:07:54.855 sys 0m3.262s 00:07:54.855 03:11:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.855 03:11:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:54.855 ************************************ 00:07:54.855 END TEST nvmf_example 00:07:54.855 ************************************ 00:07:54.855 03:11:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:54.855 03:11:00 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:54.855 03:11:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.855 03:11:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.855 03:11:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.855 ************************************ 00:07:54.855 START TEST nvmf_filesystem 00:07:54.855 ************************************ 00:07:54.855 03:11:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:55.117 * Looking for test storage... 
00:07:55.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:55.117 03:11:01 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:55.117 03:11:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:55.118 #define SPDK_CONFIG_H 00:07:55.118 #define SPDK_CONFIG_APPS 1 00:07:55.118 #define SPDK_CONFIG_ARCH native 00:07:55.118 #undef SPDK_CONFIG_ASAN 00:07:55.118 #undef SPDK_CONFIG_AVAHI 00:07:55.118 #undef SPDK_CONFIG_CET 00:07:55.118 #define SPDK_CONFIG_COVERAGE 1 00:07:55.118 #define SPDK_CONFIG_CROSS_PREFIX 00:07:55.118 #undef SPDK_CONFIG_CRYPTO 00:07:55.118 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:55.118 #undef SPDK_CONFIG_CUSTOMOCF 00:07:55.118 #undef SPDK_CONFIG_DAOS 00:07:55.118 #define SPDK_CONFIG_DAOS_DIR 00:07:55.118 #define SPDK_CONFIG_DEBUG 1 00:07:55.118 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:55.118 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:55.118 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:55.118 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:55.118 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:55.118 #undef SPDK_CONFIG_DPDK_UADK 00:07:55.118 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:55.118 #define SPDK_CONFIG_EXAMPLES 1 00:07:55.118 #undef SPDK_CONFIG_FC 00:07:55.118 #define SPDK_CONFIG_FC_PATH 00:07:55.118 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:55.118 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:55.118 #undef SPDK_CONFIG_FUSE 00:07:55.118 #undef SPDK_CONFIG_FUZZER 00:07:55.118 #define SPDK_CONFIG_FUZZER_LIB 00:07:55.118 #undef SPDK_CONFIG_GOLANG 00:07:55.118 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:55.118 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:55.118 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:55.118 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:55.118 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:55.118 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:55.118 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:55.118 #define SPDK_CONFIG_IDXD 1 00:07:55.118 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:55.118 #undef SPDK_CONFIG_IPSEC_MB 00:07:55.118 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:55.118 #define SPDK_CONFIG_ISAL 1 00:07:55.118 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:55.118 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:55.118 #define 
SPDK_CONFIG_LIBDIR 00:07:55.118 #undef SPDK_CONFIG_LTO 00:07:55.118 #define SPDK_CONFIG_MAX_LCORES 128 00:07:55.118 #define SPDK_CONFIG_NVME_CUSE 1 00:07:55.118 #undef SPDK_CONFIG_OCF 00:07:55.118 #define SPDK_CONFIG_OCF_PATH 00:07:55.118 #define SPDK_CONFIG_OPENSSL_PATH 00:07:55.118 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:55.118 #define SPDK_CONFIG_PGO_DIR 00:07:55.118 #undef SPDK_CONFIG_PGO_USE 00:07:55.118 #define SPDK_CONFIG_PREFIX /usr/local 00:07:55.118 #undef SPDK_CONFIG_RAID5F 00:07:55.118 #undef SPDK_CONFIG_RBD 00:07:55.118 #define SPDK_CONFIG_RDMA 1 00:07:55.118 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:55.118 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:55.118 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:55.118 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:55.118 #define SPDK_CONFIG_SHARED 1 00:07:55.118 #undef SPDK_CONFIG_SMA 00:07:55.118 #define SPDK_CONFIG_TESTS 1 00:07:55.118 #undef SPDK_CONFIG_TSAN 00:07:55.118 #define SPDK_CONFIG_UBLK 1 00:07:55.118 #define SPDK_CONFIG_UBSAN 1 00:07:55.118 #undef SPDK_CONFIG_UNIT_TESTS 00:07:55.118 #undef SPDK_CONFIG_URING 00:07:55.118 #define SPDK_CONFIG_URING_PATH 00:07:55.118 #undef SPDK_CONFIG_URING_ZNS 00:07:55.118 #undef SPDK_CONFIG_USDT 00:07:55.118 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:55.118 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:55.118 #define SPDK_CONFIG_VFIO_USER 1 00:07:55.118 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:55.118 #define SPDK_CONFIG_VHOST 1 00:07:55.118 #define SPDK_CONFIG_VIRTIO 1 00:07:55.118 #undef SPDK_CONFIG_VTUNE 00:07:55.118 #define SPDK_CONFIG_VTUNE_DIR 00:07:55.118 #define SPDK_CONFIG_WERROR 1 00:07:55.118 #define SPDK_CONFIG_WPDK_DIR 00:07:55.118 #undef SPDK_CONFIG_XNVME 00:07:55.118 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:55.118 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:55.119 
03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:55.119 
03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:55.119 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
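A condensed sketch of the sanitizer setup the harness performs just above. The suppression-file path, the leak:libfuse3.so pattern, and all option strings are copied from the trace; the rest is standard ASan/LSan plumbing:

  # Known libfuse leak is suppressed so LeakSanitizer does not fail the run
  rm -rf /var/tmp/asan_suppression_file
  echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
  # Abort (exit code 134) on real sanitizer errors, but tolerate new/delete type mismatches
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134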
00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3078470 ]] 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3078470 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.GoFCCE 00:07:55.120 
03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GoFCCE/tests/target /tmp/spdk.GoFCCE 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=52935581696 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9059110912 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996664320 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=684032 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:55.120 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:55.121 * Looking for test storage... 
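set_test_storage, whose trace follows, parses `df -T` into per-mount associative arrays and then walks the candidate directories until one sits on a filesystem with at least the requested space free. A simplified sketch of that loop (the array names, read order, and the 1K-block-to-byte scaling follow the trace; this is an approximation of the helper, not its exact body):

  requested_size=2214592512      # 2 GiB plus headroom, as requested above
  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source; fss["$mount"]=$fs
      sizes["$mount"]=$((size * 1024)); avails["$mount"]=$((avail * 1024))   # df reports 1K blocks
  done < <(df -T | grep -v Filesystem)
  for target_dir in "${storage_candidates[@]}"; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      target_space=${avails[$mount]}
      (( target_space >= requested_size )) && break    # found a big enough filesystem
  done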
00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=52935581696 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=11273703424 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.121 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:55.122 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
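The NVME_HOSTNQN/NVME_HOSTID pair picked up in nvmf/common.sh just above comes from nvme-cli's generator; roughly (the UUID is the one from this run, and the parameter expansion used to split it is illustrative, not necessarily the exact line in common.sh):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

Both flags reappear verbatim on the `nvme connect` call later in the test.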
00:07:55.122 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.122 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.122 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.122 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.122 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:55.122 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:55.122 03:11:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:55.122 03:11:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
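gather_supported_nvmf_pci_devs, traced next, keys its e810/x722/mlx arrays on PCI vendor:device IDs (0x8086:0x159b is the Intel E810 function this host reports twice) and then maps each matching PCI function to its kernel interface through sysfs. A minimal sketch of that mapping step, using values from this run:

  pci=0000:0a:00.0                                   # first E810 port found below
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev on the function
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 in this run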
00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:57.660 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:57.660 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:57.660 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.660 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:57.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:57.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:57.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:07:57.661 00:07:57.661 --- 10.0.0.2 ping statistics --- 00:07:57.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.661 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:07:57.661 00:07:57.661 --- 10.0.0.1 ping statistics --- 00:07:57.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.661 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 ************************************ 00:07:57.661 START TEST nvmf_filesystem_no_in_capsule 00:07:57.661 ************************************ 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3080094 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3080094 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
3080094 ']' 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 [2024-07-15 03:11:03.422750] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:57.661 [2024-07-15 03:11:03.422829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.661 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.661 [2024-07-15 03:11:03.492560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.661 [2024-07-15 03:11:03.586475] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.661 [2024-07-15 03:11:03.586541] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.661 [2024-07-15 03:11:03.586558] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.661 [2024-07-15 03:11:03.586572] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.661 [2024-07-15 03:11:03.586583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
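Because this is a phy run (NET_TYPE=phy) with two ports of one physical NIC, nvmf_tcp_init gets its point-to-point link by moving one port into a private network namespace instead of creating veth pairs. Gathered in one place, the namespace plumbing traced above amounts to the following; all commands and addresses are as executed, and the target is then launched inside the namespace (the `ip netns exec cvl_0_0_ns_spdk … nvmf_tgt -i 0 -e 0xFFFF -m 0xF` call above) while the harness waits on /var/tmp/spdk.sock:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                    # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1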
00:07:57.661 [2024-07-15 03:11:03.586663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.661 [2024-07-15 03:11:03.586729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.661 [2024-07-15 03:11:03.586822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.661 [2024-07-15 03:11:03.586823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 [2024-07-15 03:11:03.740784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.661 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.921 Malloc1 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.921 [2024-07-15 03:11:03.920653] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.921 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:57.921 { 00:07:57.921 "name": "Malloc1", 00:07:57.921 "aliases": [ 00:07:57.921 "f9711926-ca57-4a0b-8f77-dc94b621b428" 00:07:57.921 ], 00:07:57.921 "product_name": "Malloc disk", 00:07:57.921 "block_size": 512, 00:07:57.921 "num_blocks": 1048576, 00:07:57.921 "uuid": "f9711926-ca57-4a0b-8f77-dc94b621b428", 00:07:57.921 "assigned_rate_limits": { 00:07:57.921 "rw_ios_per_sec": 0, 00:07:57.921 "rw_mbytes_per_sec": 0, 00:07:57.921 "r_mbytes_per_sec": 0, 00:07:57.921 "w_mbytes_per_sec": 0 00:07:57.921 }, 00:07:57.921 "claimed": true, 00:07:57.921 "claim_type": "exclusive_write", 00:07:57.921 "zoned": false, 00:07:57.921 "supported_io_types": { 00:07:57.921 "read": true, 00:07:57.921 "write": true, 00:07:57.921 "unmap": true, 00:07:57.921 "flush": true, 00:07:57.921 "reset": true, 00:07:57.921 "nvme_admin": false, 00:07:57.921 "nvme_io": false, 00:07:57.921 "nvme_io_md": false, 00:07:57.921 "write_zeroes": true, 00:07:57.921 "zcopy": true, 00:07:57.922 "get_zone_info": false, 00:07:57.922 "zone_management": false, 00:07:57.922 "zone_append": false, 00:07:57.922 "compare": false, 00:07:57.922 "compare_and_write": false, 00:07:57.922 "abort": true, 00:07:57.922 "seek_hole": false, 00:07:57.922 "seek_data": false, 00:07:57.922 "copy": true, 00:07:57.922 "nvme_iov_md": false 00:07:57.922 }, 00:07:57.922 "memory_domains": [ 00:07:57.922 { 
00:07:57.922 "dma_device_id": "system", 00:07:57.922 "dma_device_type": 1 00:07:57.922 }, 00:07:57.922 { 00:07:57.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.922 "dma_device_type": 2 00:07:57.922 } 00:07:57.922 ], 00:07:57.922 "driver_specific": {} 00:07:57.922 } 00:07:57.922 ]' 00:07:57.922 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:57.922 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:57.922 03:11:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:57.922 03:11:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:57.922 03:11:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:57.922 03:11:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:57.922 03:11:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:57.922 03:11:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:58.887 03:11:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:58.887 03:11:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:58.887 03:11:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:58.887 03:11:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:58.887 03:11:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:00.796 03:11:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:01.054 03:11:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:01.620 03:11:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.556 ************************************ 00:08:02.556 START TEST filesystem_ext4 00:08:02.556 ************************************ 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:02.556 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:02.556 03:11:08 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:02.556 mke2fs 1.46.5 (30-Dec-2021) 00:08:02.815 Discarding device blocks: 0/522240 done 00:08:02.815 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:02.815 Filesystem UUID: 1effef9e-b67a-4f4c-936d-73e5d22aa8cc 00:08:02.815 Superblock backups stored on blocks: 00:08:02.815 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:02.815 00:08:02.815 Allocating group tables: 0/64 done 00:08:02.815 Writing inode tables: 0/64 done 00:08:02.815 Creating journal (8192 blocks): done 00:08:02.815 Writing superblocks and filesystem accounting information: 0/64 done 00:08:02.815 00:08:02.815 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:02.815 03:11:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.750 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.750 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:03.750 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.750 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:03.750 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:03.750 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.008 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3080094 00:08:04.008 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.009 00:08:04.009 real 0m1.219s 00:08:04.009 user 0m0.010s 00:08:04.009 sys 0m0.060s 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:04.009 ************************************ 00:08:04.009 END TEST filesystem_ext4 00:08:04.009 ************************************ 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:04.009 03:11:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.009 ************************************ 00:08:04.009 START TEST filesystem_btrfs 00:08:04.009 ************************************ 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:04.009 03:11:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:04.266 btrfs-progs v6.6.2 00:08:04.266 See https://btrfs.readthedocs.io for more information. 00:08:04.266 00:08:04.266 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:04.266 NOTE: several default settings have changed in version 5.15, please make sure 00:08:04.266 this does not affect your deployments: 00:08:04.266 - DUP for metadata (-m dup) 00:08:04.266 - enabled no-holes (-O no-holes) 00:08:04.266 - enabled free-space-tree (-R free-space-tree) 00:08:04.266 00:08:04.266 Label: (null) 00:08:04.266 UUID: 8da20a05-f944-4ddc-bd53-a0dd0b5c9260 00:08:04.266 Node size: 16384 00:08:04.266 Sector size: 4096 00:08:04.266 Filesystem size: 510.00MiB 00:08:04.266 Block group profiles: 00:08:04.266 Data: single 8.00MiB 00:08:04.266 Metadata: DUP 32.00MiB 00:08:04.266 System: DUP 8.00MiB 00:08:04.266 SSD detected: yes 00:08:04.266 Zoned device: no 00:08:04.266 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:04.266 Runtime features: free-space-tree 00:08:04.266 Checksum: crc32c 00:08:04.266 Number of devices: 1 00:08:04.266 Devices: 00:08:04.266 ID SIZE PATH 00:08:04.266 1 510.00MiB /dev/nvme0n1p1 00:08:04.266 00:08:04.266 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:04.266 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3080094 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.834 00:08:04.834 real 0m0.947s 00:08:04.834 user 0m0.015s 00:08:04.834 sys 0m0.116s 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:04.834 ************************************ 00:08:04.834 END TEST filesystem_btrfs 00:08:04.834 ************************************ 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.834 ************************************ 00:08:04.834 START TEST filesystem_xfs 00:08:04.834 ************************************ 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:04.834 03:11:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:05.092 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:05.092 = sectsz=512 attr=2, projid32bit=1 00:08:05.092 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:05.092 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:05.092 data = bsize=4096 blocks=130560, imaxpct=25 00:08:05.092 = sunit=0 swidth=0 blks 00:08:05.092 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:05.092 log =internal log bsize=4096 blocks=16384, version=2 00:08:05.092 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:05.092 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:06.030 Discarding blocks...Done. 
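The xtrace lines above repeat a small helper pattern worth calling out: make_filesystem picks the force flag per filesystem type before invoking mkfs, because mkfs.ext4 spells it -F while mkfs.btrfs and mkfs.xfs use -f. A minimal sketch of that selection, with the function and variable names as they appear in the trace (the helper's retry loop is omitted here):

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force

        # ext4 spells its force flag differently from btrfs/xfs
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi

        mkfs."$fstype" "$force" "$dev_name"
    }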
00:08:06.030 03:11:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:06.030 03:11:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3080094 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.561 00:08:08.561 real 0m3.505s 00:08:08.561 user 0m0.018s 00:08:08.561 sys 0m0.060s 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:08.561 ************************************ 00:08:08.561 END TEST filesystem_xfs 00:08:08.561 ************************************ 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:08.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:08.561 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.561 03:11:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3080094 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3080094 ']' 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3080094 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3080094 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3080094' 00:08:08.562 killing process with pid 3080094 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3080094 00:08:08.562 03:11:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3080094 00:08:09.129 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:09.129 00:08:09.129 real 0m11.705s 00:08:09.129 user 0m44.920s 00:08:09.129 sys 0m1.775s 00:08:09.129 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.129 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.129 ************************************ 00:08:09.129 END TEST nvmf_filesystem_no_in_capsule 00:08:09.129 ************************************ 00:08:09.129 03:11:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:09.129 03:11:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:09.129 03:11:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:09.129 03:11:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.129 03:11:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.129 ************************************ 00:08:09.129 START TEST nvmf_filesystem_in_capsule 00:08:09.129 ************************************ 00:08:09.129 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:09.129 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:09.129 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3081658 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3081658 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3081658 ']' 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.130 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.130 [2024-07-15 03:11:15.179522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:09.130 [2024-07-15 03:11:15.179617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.130 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.130 [2024-07-15 03:11:15.245439] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.389 [2024-07-15 03:11:15.335083] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.389 [2024-07-15 03:11:15.335135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
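Each pass starts the target the same way: nvmfappstart launches nvmf_tgt inside the test network namespace, records its pid, and polls until the RPC socket answers. A rough sketch of that startup under the paths shown in this log; the polling loop is an assumption standing in for the real waitforlisten helper, which does more bookkeeping:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll the RPC socket until the target answers
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done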
00:08:09.389 [2024-07-15 03:11:15.335149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.389 [2024-07-15 03:11:15.335160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.389 [2024-07-15 03:11:15.335170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.390 [2024-07-15 03:11:15.335249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.390 [2024-07-15 03:11:15.335314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.390 [2024-07-15 03:11:15.335380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.390 [2024-07-15 03:11:15.335382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 [2024-07-15 03:11:15.492739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.390 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.648 Malloc1 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.648 03:11:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.648 [2024-07-15 03:11:15.685208] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:09.648 { 00:08:09.648 "name": "Malloc1", 00:08:09.648 "aliases": [ 00:08:09.648 "147756d0-7f9a-44c3-b785-cf304f842df8" 00:08:09.648 ], 00:08:09.648 "product_name": "Malloc disk", 00:08:09.648 "block_size": 512, 00:08:09.648 "num_blocks": 1048576, 00:08:09.648 "uuid": "147756d0-7f9a-44c3-b785-cf304f842df8", 00:08:09.648 "assigned_rate_limits": { 00:08:09.648 "rw_ios_per_sec": 0, 00:08:09.648 "rw_mbytes_per_sec": 0, 00:08:09.648 "r_mbytes_per_sec": 0, 00:08:09.648 "w_mbytes_per_sec": 0 00:08:09.648 }, 00:08:09.648 "claimed": true, 00:08:09.648 "claim_type": "exclusive_write", 00:08:09.648 "zoned": false, 00:08:09.648 "supported_io_types": { 00:08:09.648 "read": true, 00:08:09.648 "write": true, 00:08:09.648 "unmap": true, 00:08:09.648 "flush": true, 00:08:09.648 "reset": true, 00:08:09.648 "nvme_admin": false, 00:08:09.648 "nvme_io": false, 00:08:09.648 "nvme_io_md": false, 00:08:09.648 "write_zeroes": true, 00:08:09.648 "zcopy": true, 00:08:09.648 "get_zone_info": false, 00:08:09.648 "zone_management": false, 00:08:09.648 
"zone_append": false, 00:08:09.648 "compare": false, 00:08:09.648 "compare_and_write": false, 00:08:09.648 "abort": true, 00:08:09.648 "seek_hole": false, 00:08:09.648 "seek_data": false, 00:08:09.648 "copy": true, 00:08:09.648 "nvme_iov_md": false 00:08:09.648 }, 00:08:09.648 "memory_domains": [ 00:08:09.648 { 00:08:09.648 "dma_device_id": "system", 00:08:09.648 "dma_device_type": 1 00:08:09.648 }, 00:08:09.648 { 00:08:09.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.648 "dma_device_type": 2 00:08:09.648 } 00:08:09.648 ], 00:08:09.648 "driver_specific": {} 00:08:09.648 } 00:08:09.648 ]' 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:09.648 03:11:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.607 03:11:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:10.607 03:11:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:10.607 03:11:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:10.607 03:11:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:10.607 03:11:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:12.527 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:12.527 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:12.527 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:12.527 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:12.527 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:12.528 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:12.786 03:11:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:13.353 03:11:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:14.290 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:14.290 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:14.290 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:14.290 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.290 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.290 ************************************ 00:08:14.290 START TEST filesystem_in_capsule_ext4 00:08:14.291 ************************************ 00:08:14.291 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:14.291 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:14.291 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.291 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:14.291 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:14.291 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:14.291 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:14.291 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:14.291 03:11:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:14.291 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:14.291 03:11:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:14.291 mke2fs 1.46.5 (30-Dec-2021) 00:08:14.291 Discarding device blocks: 0/522240 done 00:08:14.291 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:14.291 Filesystem UUID: ce81a1c1-d7d5-4e59-aa2b-1d3ff3846b0c 00:08:14.291 Superblock backups stored on blocks: 00:08:14.291 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:14.291 00:08:14.291 Allocating group tables: 0/64 done 00:08:14.291 Writing inode tables: 0/64 done 00:08:16.197 Creating journal (8192 blocks): done 00:08:16.767 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:08:16.767 00:08:16.767 03:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:16.767 03:11:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3081658 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.704 00:08:17.704 real 0m3.476s 00:08:17.704 user 0m0.020s 00:08:17.704 sys 0m0.058s 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:17.704 ************************************ 00:08:17.704 END TEST filesystem_in_capsule_ext4 00:08:17.704 ************************************ 
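Every TEST block in this log runs the same verify cycle from target/filesystem.sh: format the GPT partition, mount it, exercise a file create/delete with syncs, unmount, and confirm both that the target process survived and that the device is still visible. Roughly, using the pid and device names from the surrounding trace:

    make_filesystem "$fstype" /dev/nvme0n1p1

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    # the nvmf target must still be alive and the namespace still present
    kill -0 "$nvmfpid"
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1

The point of the cycle is less the filesystem itself than forcing real read/write I/O through the NVMe/TCP data path for each of ext4, btrfs, and xfs.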
00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:17.704 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.705 ************************************ 00:08:17.705 START TEST filesystem_in_capsule_btrfs 00:08:17.705 ************************************ 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:17.705 03:11:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:18.274 btrfs-progs v6.6.2 00:08:18.274 See https://btrfs.readthedocs.io for more information. 00:08:18.274 00:08:18.274 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:18.274 NOTE: several default settings have changed in version 5.15, please make sure 00:08:18.274 this does not affect your deployments: 00:08:18.274 - DUP for metadata (-m dup) 00:08:18.274 - enabled no-holes (-O no-holes) 00:08:18.274 - enabled free-space-tree (-R free-space-tree) 00:08:18.274 00:08:18.274 Label: (null) 00:08:18.274 UUID: 7777afaa-a425-4398-807d-2c901c753903 00:08:18.274 Node size: 16384 00:08:18.274 Sector size: 4096 00:08:18.274 Filesystem size: 510.00MiB 00:08:18.274 Block group profiles: 00:08:18.274 Data: single 8.00MiB 00:08:18.274 Metadata: DUP 32.00MiB 00:08:18.274 System: DUP 8.00MiB 00:08:18.274 SSD detected: yes 00:08:18.274 Zoned device: no 00:08:18.274 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:18.274 Runtime features: free-space-tree 00:08:18.274 Checksum: crc32c 00:08:18.274 Number of devices: 1 00:08:18.274 Devices: 00:08:18.274 ID SIZE PATH 00:08:18.274 1 510.00MiB /dev/nvme0n1p1 00:08:18.274 00:08:18.274 03:11:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:18.275 03:11:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.841 03:11:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.841 03:11:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:18.841 03:11:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.841 03:11:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:18.841 03:11:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:18.841 03:11:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3081658 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.101 00:08:19.101 real 0m1.241s 00:08:19.101 user 0m0.015s 00:08:19.101 sys 0m0.128s 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 ************************************ 00:08:19.101 END TEST filesystem_in_capsule_btrfs 00:08:19.101 ************************************ 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.101 ************************************ 00:08:19.101 START TEST filesystem_in_capsule_xfs 00:08:19.101 ************************************ 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:19.101 03:11:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:19.101 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:19.101 = sectsz=512 attr=2, projid32bit=1 00:08:19.101 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:19.101 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:19.101 data = bsize=4096 blocks=130560, imaxpct=25 00:08:19.101 = sunit=0 swidth=0 blks 00:08:19.101 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:19.101 log =internal log bsize=4096 blocks=16384, version=2 00:08:19.101 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:19.101 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:20.037 Discarding blocks...Done. 
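By this point the in-capsule pass has repeated the same ext4/btrfs/xfs cycle as the first pass; the only target-side difference between the two runs is the in-capsule data size given when the TCP transport is created. The first run disables in-capsule data, while this run lets up to 4096 bytes of write data travel inside the command capsule itself. Via the RPC interface that difference is a single flag, as the two nvmf_create_transport calls in this log show:

    # first pass (nvmf_filesystem_no_in_capsule): no in-capsule data
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0

    # second pass (nvmf_filesystem_in_capsule): up to 4 KiB in-capsule per command
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096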
00:08:20.296 03:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:20.296 03:11:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3081658 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.833 00:08:22.833 real 0m3.443s 00:08:22.833 user 0m0.024s 00:08:22.833 sys 0m0.056s 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:22.833 ************************************ 00:08:22.833 END TEST filesystem_in_capsule_xfs 00:08:22.833 ************************************ 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:22.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.833 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:22.834 03:11:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3081658 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3081658 ']' 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3081658 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3081658 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3081658' 00:08:22.834 killing process with pid 3081658 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3081658 00:08:22.834 03:11:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3081658 00:08:23.402 03:11:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:23.402 00:08:23.402 real 0m14.117s 00:08:23.402 user 0m54.429s 00:08:23.402 sys 0m1.902s 00:08:23.402 03:11:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.402 03:11:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.402 ************************************ 00:08:23.402 END TEST nvmf_filesystem_in_capsule 00:08:23.402 ************************************ 00:08:23.402 03:11:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:23.402 03:11:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:23.402 03:11:29 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.402 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:23.402 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.402 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:23.402 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.402 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.403 rmmod nvme_tcp 00:08:23.403 rmmod nvme_fabrics 00:08:23.403 rmmod nvme_keyring 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.403 03:11:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.312 03:11:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.312 00:08:25.312 real 0m30.385s 00:08:25.312 user 1m40.283s 00:08:25.312 sys 0m5.306s 00:08:25.312 03:11:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.312 03:11:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.312 ************************************ 00:08:25.312 END TEST nvmf_filesystem 00:08:25.312 ************************************ 00:08:25.312 03:11:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:25.312 03:11:31 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:25.312 03:11:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.312 03:11:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.312 03:11:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.312 ************************************ 00:08:25.312 START TEST nvmf_target_discovery 00:08:25.312 ************************************ 00:08:25.312 03:11:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:25.569 * Looking for test storage... 
00:08:25.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.569 03:11:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.514 03:11:33 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:27.514 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:27.514 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:27.514 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:27.514 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.514 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up
00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:27.515 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:27.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:27.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms
00:08:27.776
00:08:27.776 --- 10.0.0.2 ping statistics ---
00:08:27.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:27.776 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:27.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:27.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms
00:08:27.776
00:08:27.776 --- 10.0.0.1 ping statistics ---
00:08:27.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:27.776 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3085411
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3085411
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3085411 ']'
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX
domain socket /var/tmp/spdk.sock...' 00:08:27.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.776 03:11:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.776 [2024-07-15 03:11:33.745489] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:27.776 [2024-07-15 03:11:33.745563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.776 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.776 [2024-07-15 03:11:33.811820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.776 [2024-07-15 03:11:33.899346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.776 [2024-07-15 03:11:33.899421] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.776 [2024-07-15 03:11:33.899435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.776 [2024-07-15 03:11:33.899445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.776 [2024-07-15 03:11:33.899469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.776 [2024-07-15 03:11:33.899581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.776 [2024-07-15 03:11:33.899643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.776 [2024-07-15 03:11:33.899709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.776 [2024-07-15 03:11:33.899711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 [2024-07-15 03:11:34.052720] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
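At this point nvmf_tgt is running inside the cvl_0_0_ns_spdk namespace with the TCP transport created (rpc_cmd nvmf_create_transport -t tcp -o -u 8192), and the trace that continues below is discovery.sh's setup loop: each of the four iterations creates a null bdev, wraps it in a subsystem, attaches the bdev as a namespace, and publishes the subsystem on the 10.0.0.2:4420 listener. A condensed sketch of the same four-step sequence follows; the test itself drives these RPCs through its rpc_cmd wrapper, so the direct scripts/rpc.py invocation shown here is an assumption, while the NQNs, serials, sizes, and listener address are taken from the trace.

# Condensed form of the discovery.sh setup loop traced below.
for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create "Null$i" 102400 512
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"        # -a: allow any host to connect
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done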
00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 Null1 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 [2024-07-15 03:11:34.093074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 Null2 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:28.037 03:11:34 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 Null3 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 Null4 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:28.037 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.037 03:11:34 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:28.298 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:08:28.298
00:08:28.298 Discovery Log Number of Records 6, Generation counter 6
00:08:28.298 =====Discovery Log Entry 0======
00:08:28.298 trtype: tcp
00:08:28.298 adrfam: ipv4
00:08:28.298 subtype: current discovery subsystem
00:08:28.298 treq: not required
00:08:28.298 portid: 0
00:08:28.298 trsvcid: 4420
00:08:28.298 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:08:28.298 traddr: 10.0.0.2
00:08:28.298 eflags: explicit discovery connections, duplicate discovery information
00:08:28.298 sectype: none
00:08:28.298 =====Discovery Log Entry 1======
00:08:28.298 trtype: tcp
00:08:28.298 adrfam: ipv4
00:08:28.298 subtype: nvme subsystem
00:08:28.298 treq: not required
00:08:28.299 portid: 0
00:08:28.299 trsvcid: 4420
00:08:28.299 subnqn: nqn.2016-06.io.spdk:cnode1
00:08:28.299 traddr: 10.0.0.2
00:08:28.299 eflags: none
00:08:28.299 sectype: none
00:08:28.299 =====Discovery Log Entry 2======
00:08:28.299 trtype: tcp
00:08:28.299 adrfam: ipv4
00:08:28.299 subtype: nvme subsystem
00:08:28.299 treq: not required
00:08:28.299 portid: 0
00:08:28.299 trsvcid: 4420
00:08:28.299 subnqn: nqn.2016-06.io.spdk:cnode2
00:08:28.299 traddr: 10.0.0.2
00:08:28.299 eflags: none
00:08:28.299 sectype: none
00:08:28.299 =====Discovery Log Entry 3======
00:08:28.299 trtype: tcp
00:08:28.299 adrfam: ipv4
00:08:28.299 subtype: nvme subsystem
00:08:28.299 treq: not required
00:08:28.299 portid: 0
00:08:28.299 trsvcid: 4420
00:08:28.299 subnqn: nqn.2016-06.io.spdk:cnode3
00:08:28.299 traddr: 10.0.0.2
00:08:28.299 eflags: none
00:08:28.299 sectype: none
00:08:28.299 =====Discovery Log Entry 4======
00:08:28.299 trtype: tcp
00:08:28.299 adrfam: ipv4
00:08:28.299 subtype: nvme subsystem
00:08:28.299 treq: not required
00:08:28.299 portid: 0
00:08:28.299 trsvcid: 4420
00:08:28.299 subnqn: nqn.2016-06.io.spdk:cnode4
00:08:28.299 traddr: 10.0.0.2
00:08:28.299 eflags: none
00:08:28.299 sectype: none
00:08:28.299 =====Discovery Log Entry 5======
00:08:28.299 trtype: tcp
00:08:28.299 adrfam: ipv4
00:08:28.299 subtype: discovery subsystem referral
00:08:28.299 treq: not required
00:08:28.299 portid: 0
00:08:28.299 trsvcid: 4430
00:08:28.299 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:08:28.299 traddr: 10.0.0.2
00:08:28.299 eflags: none
00:08:28.299 sectype: none
00:08:28.299 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:08:28.299 Perform nvmf subsystem discovery via RPC
00:08:28.299 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:08:28.299 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:28.299 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:28.299 [
00:08:28.299 {
00:08:28.299 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:08:28.299 "subtype": "Discovery",
00:08:28.299 "listen_addresses": [
00:08:28.299 {
00:08:28.299 "trtype": "TCP",
00:08:28.299 "adrfam": "IPv4",
00:08:28.299 "traddr": "10.0.0.2",
00:08:28.299 "trsvcid": "4420"
00:08:28.299 }
00:08:28.299 ],
00:08:28.299 "allow_any_host": true,
00:08:28.299 "hosts": []
00:08:28.299 },
00:08:28.299 {
00:08:28.299 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:08:28.299 "subtype": "NVMe",
00:08:28.299 "listen_addresses": [
00:08:28.299 {
00:08:28.299 "trtype": "TCP",
00:08:28.299 "adrfam": "IPv4",
00:08:28.299 "traddr": "10.0.0.2",
00:08:28.299 "trsvcid": "4420"
00:08:28.299 }
00:08:28.299 ],
00:08:28.299 "allow_any_host": true,
00:08:28.299 "hosts": [],
00:08:28.299 "serial_number": "SPDK00000000000001",
00:08:28.299 "model_number": "SPDK bdev Controller",
00:08:28.299 "max_namespaces": 32,
00:08:28.299 "min_cntlid": 1,
00:08:28.299 "max_cntlid": 65519,
00:08:28.299 "namespaces": [
00:08:28.299 {
00:08:28.299 "nsid": 1,
00:08:28.299 "bdev_name": "Null1",
00:08:28.299 "name": "Null1",
00:08:28.299 "nguid": "AA0C813873FD4686B7EB808544FDD794",
00:08:28.299 "uuid": "aa0c8138-73fd-4686-b7eb-808544fdd794"
00:08:28.299 }
00:08:28.299 ]
00:08:28.299 },
00:08:28.299 {
00:08:28.299 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:08:28.299 "subtype": "NVMe",
00:08:28.299 "listen_addresses": [
00:08:28.299 {
00:08:28.299 "trtype": "TCP",
00:08:28.299 "adrfam": "IPv4",
00:08:28.299 "traddr": "10.0.0.2",
00:08:28.299 "trsvcid": "4420"
00:08:28.299 }
00:08:28.299 ],
00:08:28.299 "allow_any_host": true,
00:08:28.299 "hosts": [],
00:08:28.299 "serial_number": "SPDK00000000000002",
00:08:28.299 "model_number": "SPDK bdev Controller",
00:08:28.299 "max_namespaces": 32,
00:08:28.299 "min_cntlid": 1,
00:08:28.299 "max_cntlid": 65519,
00:08:28.299 "namespaces": [
00:08:28.299 {
00:08:28.299 "nsid": 1,
00:08:28.299 "bdev_name": "Null2",
00:08:28.299 "name": "Null2",
00:08:28.299 "nguid": "5BCCC087AF85490E83DB9B3336B4ADB8",
00:08:28.299 "uuid": "5bccc087-af85-490e-83db-9b3336b4adb8"
00:08:28.299 }
00:08:28.299 ]
00:08:28.299 },
00:08:28.299 {
00:08:28.299 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:08:28.299 "subtype": "NVMe",
00:08:28.299 "listen_addresses": [
00:08:28.299 {
00:08:28.299 "trtype": "TCP",
00:08:28.299 "adrfam": "IPv4",
00:08:28.299 "traddr": "10.0.0.2",
00:08:28.299 "trsvcid": "4420"
00:08:28.299 }
00:08:28.299 ],
00:08:28.299 "allow_any_host": true,
00:08:28.299 "hosts": [],
00:08:28.299 "serial_number": "SPDK00000000000003",
00:08:28.299 "model_number": "SPDK bdev Controller",
00:08:28.299 "max_namespaces": 32,
00:08:28.299 "min_cntlid": 1,
00:08:28.299 "max_cntlid": 65519,
00:08:28.299 "namespaces": [
00:08:28.299 {
00:08:28.299 "nsid": 1,
00:08:28.299 "bdev_name": "Null3",
00:08:28.299 "name": "Null3",
00:08:28.299 "nguid": "6CB7BA6E3E174DD29799DC4658D98FD6",
00:08:28.299 "uuid": "6cb7ba6e-3e17-4dd2-9799-dc4658d98fd6"
00:08:28.299 }
00:08:28.299 ]
00:08:28.299 },
00:08:28.299 {
00:08:28.299 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:08:28.299 "subtype": "NVMe",
00:08:28.299 "listen_addresses": [
00:08:28.299 {
00:08:28.299 "trtype": "TCP",
00:08:28.299 "adrfam": "IPv4",
00:08:28.299 "traddr": "10.0.0.2",
00:08:28.299 "trsvcid": "4420"
00:08:28.299 }
00:08:28.299 ],
00:08:28.299 "allow_any_host": true,
00:08:28.299 "hosts": [],
00:08:28.299 "serial_number": "SPDK00000000000004",
00:08:28.299 "model_number": "SPDK bdev Controller",
00:08:28.299 "max_namespaces": 32,
00:08:28.299 "min_cntlid": 1,
00:08:28.299 "max_cntlid": 65519,
00:08:28.299 "namespaces": [
00:08:28.299 {
00:08:28.299 "nsid": 1,
00:08:28.299 "bdev_name": "Null4",
00:08:28.299 "name": "Null4",
00:08:28.299 "nguid": "5672FDFEB7AE412D88C1EB550B87204A",
00:08:28.299 "uuid": "5672fdfe-b7ae-412d-88c1-eb550b87204a"
00:08:28.299 }
00:08:28.299 ]
00:08:28.299 }
00:08:28.299 ]
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.300 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.561 rmmod nvme_tcp 00:08:28.561 rmmod nvme_fabrics 00:08:28.561 rmmod nvme_keyring 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3085411 ']' 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3085411 00:08:28.561 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3085411 ']' 00:08:28.562 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3085411 00:08:28.562 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:28.562 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:28.562 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3085411 00:08:28.562 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:28.562 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:28.562 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3085411' 00:08:28.562 killing process with pid 3085411 00:08:28.562 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3085411 00:08:28.562 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3085411 00:08:28.822 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.822 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.822 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.822 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.822 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.822 03:11:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.822 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.822 03:11:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.763 03:11:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:30.763 00:08:30.763 real 0m5.411s 00:08:30.763 user 0m4.324s 00:08:30.763 sys 0m1.879s 00:08:30.763 03:11:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.763 03:11:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:30.763 ************************************ 00:08:30.763 END TEST nvmf_target_discovery 00:08:30.763 ************************************ 00:08:30.763 03:11:36 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:30.763 03:11:36 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:30.763 03:11:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:30.763 03:11:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.763 03:11:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:31.020 ************************************ 00:08:31.020 START TEST nvmf_referrals 00:08:31.020 ************************************ 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:31.020 * Looking for test storage... 00:08:31.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.020 03:11:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
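The setup trace above reduces to a small set of fixture values that the rest of the referrals test keys off. A condensed sketch of those values (variable names and values exactly as they appear in the trace; the comments are editorial):

    # host identity generated by 'nvme gen-hostnqn' in nvmf/common.sh
    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
    # three referral endpoints that referrals.sh will register against the target
    NVMF_REFERRAL_IP_1=127.0.0.2
    NVMF_REFERRAL_IP_2=127.0.0.3
    NVMF_REFERRAL_IP_3=127.0.0.4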
00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:31.021 03:11:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.919 03:11:38 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:32.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:32.919 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:32.920 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.920 03:11:38 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:32.920 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:32.920 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.920 03:11:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.920 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.920 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.920 03:11:39 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:32.920 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.178 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.178 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.178 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:33.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:08:33.178 00:08:33.178 --- 10.0.0.2 ping statistics --- 00:08:33.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.178 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:08:33.178 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:08:33.179 00:08:33.179 --- 10.0.0.1 ping statistics --- 00:08:33.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.179 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3087500 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3087500 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3087500 ']' 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
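Everything nvmf_tcp_init does above amounts to a two-port loopback topology: one port of the dual-port NIC (cvl_0_0) is moved into a private network namespace to play the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. Condensed into a sketch (every command appears verbatim in the trace above):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP (port 4420) on the initiator side
    ping -c 1 10.0.0.2                                                  # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two single-packet pings in the log (0.150 ms and 0.086 ms round trips, no loss) confirm the link works in both directions before the target application is started.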
00:08:33.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.179 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.179 [2024-07-15 03:11:39.179094] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:33.179 [2024-07-15 03:11:39.179178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.179 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.179 [2024-07-15 03:11:39.246708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.436 [2024-07-15 03:11:39.338841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.436 [2024-07-15 03:11:39.338891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.436 [2024-07-15 03:11:39.338907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.437 [2024-07-15 03:11:39.338919] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.437 [2024-07-15 03:11:39.338930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.437 [2024-07-15 03:11:39.338985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.437 [2024-07-15 03:11:39.339024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.437 [2024-07-15 03:11:39.339073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.437 [2024-07-15 03:11:39.339076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.437 [2024-07-15 03:11:39.497581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.437 [2024-07-15 03:11:39.509812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:33.437 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:33.694 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.694 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.694 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:33.694 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.694 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.694 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:33.694 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.695 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.953 03:11:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:33.953 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.953 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:33.953 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:33.953 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:33.953 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.953 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.953 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.953 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.953 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:34.211 03:11:40 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.211 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:34.469 03:11:40 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.469 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.726 03:11:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:34.985 
03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.985 03:11:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:34.985 rmmod nvme_tcp 00:08:34.985 rmmod nvme_fabrics 00:08:34.985 rmmod nvme_keyring 00:08:34.985 03:11:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.985 03:11:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:34.985 03:11:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:34.985 03:11:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3087500 ']' 00:08:34.986 03:11:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3087500 00:08:34.986 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3087500 ']' 00:08:34.986 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3087500 00:08:34.986 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:34.986 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:34.986 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3087500 00:08:34.986 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:34.986 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:34.986 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3087500' 00:08:34.986 killing process with pid 3087500 00:08:34.986 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3087500 00:08:34.986 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3087500 00:08:35.243 03:11:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.243 03:11:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.243 03:11:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.244 03:11:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.244 03:11:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.244 03:11:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.244 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.244 03:11:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.775 03:11:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:37.775 00:08:37.775 real 0m6.444s 00:08:37.775 user 0m9.140s 00:08:37.775 sys 0m2.074s 00:08:37.775 03:11:43 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.775 03:11:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.775 ************************************ 00:08:37.775 END TEST nvmf_referrals 00:08:37.775 ************************************ 00:08:37.775 03:11:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:37.775 03:11:43 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:37.775 03:11:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:37.775 03:11:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.775 03:11:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.775 ************************************ 00:08:37.775 START TEST nvmf_connect_disconnect 00:08:37.775 ************************************ 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:37.775 * Looking for test storage... 00:08:37.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.775 03:11:43 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.775 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:37.776 03:11:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:39.680 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:39.680 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:39.680 03:11:45 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:39.680 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.680 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:39.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:39.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:08:39.681 00:08:39.681 --- 10.0.0.2 ping statistics --- 00:08:39.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.681 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:08:39.681 00:08:39.681 --- 10.0.0.1 ping statistics --- 00:08:39.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.681 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3089785 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3089785 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3089785 ']' 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.681 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.681 [2024-07-15 03:11:45.710128] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
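The nvmf_tcp_init trace above shows how the harness isolates target from initiator on a single node: the first discovered port (cvl_0_0) is moved into a network namespace and addressed as the target side (10.0.0.2), while the second port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), with an iptables rule opening the NVMe/TCP port and a ping in each direction as a sanity check. Condensed into a standalone sketch (a best-effort reconstruction from the commands traced above, not the harness itself; interface names are the ones this run discovered; run as root):

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                         # target gets its own namespace
ip link set cvl_0_0 netns "$NS"                            # first port -> target side
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address (root ns)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside ns)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                         # root ns reaches the target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns reaches the initiator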
00:08:39.681 [2024-07-15 03:11:45.710240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.681 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.681 [2024-07-15 03:11:45.776901] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.939 [2024-07-15 03:11:45.864058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.939 [2024-07-15 03:11:45.864105] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.939 [2024-07-15 03:11:45.864118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.939 [2024-07-15 03:11:45.864129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.939 [2024-07-15 03:11:45.864138] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.939 [2024-07-15 03:11:45.864217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.939 [2024-07-15 03:11:45.864282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.939 [2024-07-15 03:11:45.864347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.939 [2024-07-15 03:11:45.864350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.939 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.939 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:39.939 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.939 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.939 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.939 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.939 03:11:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.939 [2024-07-15 03:11:46.004575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:39.939 03:11:46 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.939 [2024-07-15 03:11:46.056328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.939 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:39.940 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:39.940 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:39.940 03:11:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:42.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.926 [2024-07-15 03:12:34.658740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175c40 is same with the state(5) to be set 00:09:28.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
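The trace above captures the whole target-side configuration, but the loop that produces the repeated "disconnected 1 controller(s)" lines runs after set +x, so its body is not traced. The following is a hedged reconstruction from the values that are visible (NVME_CONNECT='nvme connect -i 8', num_iterations=100, listener 10.0.0.2:4420); rpc.py here stands in for the harness's rpc_cmd wrapper around scripts/rpc.py:

NQN=nqn.2016-06.io.spdk:cnode1

# target-side setup, as traced via rpc_cmd above:
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512                           # returns bdev name "Malloc0"
rpc.py nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns "$NQN" Malloc0
rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# initiator loop (reconstruction, not the literal connect_disconnect.sh body):
for ((i = 1; i <= 100; i++)); do
  nvme connect -i 8 -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
  nvme disconnect -n "$NQN"   # prints "NQN:<nqn> disconnected 1 controller(s)"
done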
00:09:30.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.965 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:23.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.101 rmmod nvme_tcp 00:12:31.101 rmmod nvme_fabrics 00:12:31.101 rmmod nvme_keyring 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3089785 ']' 00:12:31.101 03:15:36 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3089785 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3089785 ']' 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3089785 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3089785 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3089785' 00:12:31.101 killing process with pid 3089785 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3089785 00:12:31.101 03:15:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3089785 00:12:31.101 03:15:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.101 03:15:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.101 03:15:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.101 03:15:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.101 03:15:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.101 03:15:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.101 03:15:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.101 03:15:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.640 03:15:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.640 00:12:33.640 real 3m55.826s 00:12:33.640 user 14m57.748s 00:12:33.640 sys 0m34.752s 00:12:33.640 03:15:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:33.640 03:15:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.640 ************************************ 00:12:33.641 END TEST nvmf_connect_disconnect 00:12:33.641 ************************************ 00:12:33.641 03:15:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:33.641 03:15:39 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:33.641 03:15:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:33.641 03:15:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.641 03:15:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:33.641 ************************************ 00:12:33.641 START TEST nvmf_multitarget 00:12:33.641 ************************************ 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:33.641 * 
Looking for test storage... 00:12:33.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
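nvmftestinit now calls gather_supported_nvmf_pci_devs, whose trace follows: supported NICs are bucketed into e810/x722/mlx arrays by PCI vendor:device ID (0x8086:0x159b is the E810 port this node has two of). The harness uses its own pci_bus_cache lookup; a minimal standalone equivalent reading sysfs directly would look like this:

for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor") device=$(<"$dev/device")
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo "Found ${dev##*/} ($vendor - $device): e810" ;;
    0x8086:0x37d2)               echo "Found ${dev##*/} ($vendor - $device): x722" ;;
    0x15b3:*)                    echo "Found ${dev##*/} ($vendor - $device): mlx"  ;;
  esac
done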
00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.641 03:15:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:35.547 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:35.547 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:35.547 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:35.547 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:35.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:35.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:12:35.547 00:12:35.547 --- 10.0.0.2 ping statistics --- 00:12:35.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.547 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:12:35.547 00:12:35.547 --- 10.0.0.1 ping statistics --- 00:12:35.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.547 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:35.547 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3121375 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3121375 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3121375 ']' 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.548 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.548 [2024-07-15 03:15:41.483727] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:12:35.548 [2024-07-15 03:15:41.483811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.548 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.548 [2024-07-15 03:15:41.555968] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.548 [2024-07-15 03:15:41.649847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.548 [2024-07-15 03:15:41.649917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.548 [2024-07-15 03:15:41.649933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.548 [2024-07-15 03:15:41.649947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.548 [2024-07-15 03:15:41.649958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.548 [2024-07-15 03:15:41.650023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.548 [2024-07-15 03:15:41.650082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.548 [2024-07-15 03:15:41.650138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.548 [2024-07-15 03:15:41.650141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.806 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:35.806 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:35.806 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:35.806 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:35.806 03:15:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.806 03:15:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.806 03:15:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:35.806 03:15:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:35.806 03:15:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:35.806 03:15:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:35.806 03:15:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:36.064 "nvmf_tgt_1" 00:12:36.064 03:15:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:36.064 "nvmf_tgt_2" 00:12:36.064 03:15:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:36.064 03:15:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:36.322 03:15:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:12:36.322 03:15:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:36.322 true 00:12:36.322 03:15:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:36.322 true 00:12:36.322 03:15:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:36.322 03:15:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.581 rmmod nvme_tcp 00:12:36.581 rmmod nvme_fabrics 00:12:36.581 rmmod nvme_keyring 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3121375 ']' 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3121375 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3121375 ']' 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3121375 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3121375 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3121375' 00:12:36.581 killing process with pid 3121375 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3121375 00:12:36.581 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3121375 00:12:36.839 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:36.839 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:36.839 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:36.839 03:15:42 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.839 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.839 03:15:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.839 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.839 03:15:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.378 03:15:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:39.378 00:12:39.378 real 0m5.662s 00:12:39.378 user 0m6.262s 00:12:39.378 sys 0m1.879s 00:12:39.378 03:15:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.378 03:15:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.378 ************************************ 00:12:39.378 END TEST nvmf_multitarget 00:12:39.378 ************************************ 00:12:39.378 03:15:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:39.378 03:15:44 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:39.378 03:15:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:39.378 03:15:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.378 03:15:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.378 ************************************ 00:12:39.378 START TEST nvmf_rpc 00:12:39.378 ************************************ 00:12:39.378 03:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:39.378 * Looking for test storage... 
00:12:39.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:39.378 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:39.379 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.379 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:39.379 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:39.379 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:39.379 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.379 03:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.379 03:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.379 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:39.379 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:39.379 03:15:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:39.379 03:15:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
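The discovery pass that follows repeats the pattern already seen twice above: after classifying ports by vendor:device ID, each PCI function is resolved to its kernel netdev through /sys/bus/pci/devices/$pci/net/ and kept only if the link is up. Condensed for one device (the harness loops over all matches and its exact operstate check differs slightly):

pci=0000:0a:00.0   # first E810 port found on this node
for path in "/sys/bus/pci/devices/$pci/net/"*; do
  dev=${path##*/}                                    # strip the sysfs path
  if [[ $(<"/sys/class/net/$dev/operstate") == up ]]; then
    echo "Found net devices under $pci: $dev"        # e.g. cvl_0_0
  fi
done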
00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:41.285 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.285 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:41.286 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:41.286 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:41.286 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:41.286 03:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:41.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:41.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms
00:12:41.286
00:12:41.286 --- 10.0.0.2 ping statistics ---
00:12:41.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:41.286 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:41.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:41.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms
00:12:41.286
00:12:41.286 --- 10.0.0.1 ping statistics ---
00:12:41.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:41.286 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3123473
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3123473
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3123473 ']'
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:41.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:41.286 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:41.286 [2024-07-15 03:15:47.195919] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:12:41.286 [2024-07-15 03:15:47.196014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:41.286 EAL: No free 2048 kB hugepages reported on node 1
00:12:41.286 [2024-07-15 03:15:47.266172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:41.286 [2024-07-15 03:15:47.361609] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:41.286 [2024-07-15 03:15:47.361657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
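Condensed, the bring-up the trace above just performed: nvmf_tcp_init moved one E810 port (cvl_0_0) into a private network namespace as the target side, kept the other (cvl_0_1) in the root namespace as the initiator side, verified reachability in both directions, loaded nvme-tcp, and launched nvmf_tgt inside the namespace; the target's remaining startup notices follow below. The essential commands, lifted from the trace (nvmf_tgt path shortened):

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                            # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp                             # initiator-side kernel driver
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &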
00:12:41.286 [2024-07-15 03:15:47.361674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:41.286 [2024-07-15 03:15:47.361687] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:41.286 [2024-07-15 03:15:47.361699] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:41.286 [2024-07-15 03:15:47.361769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:41.286 [2024-07-15 03:15:47.361826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:12:41.286 [2024-07-15 03:15:47.361849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:12:41.286 [2024-07-15 03:15:47.361852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:41.544 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:41.544 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:12:41.545 "tick_rate": 2700000000,
00:12:41.545 "poll_groups": [
00:12:41.545 {
00:12:41.545 "name": "nvmf_tgt_poll_group_000",
00:12:41.545 "admin_qpairs": 0,
00:12:41.545 "io_qpairs": 0,
00:12:41.545 "current_admin_qpairs": 0,
00:12:41.545 "current_io_qpairs": 0,
00:12:41.545 "pending_bdev_io": 0,
00:12:41.545 "completed_nvme_io": 0,
00:12:41.545 "transports": []
00:12:41.545 },
00:12:41.545 {
00:12:41.545 "name": "nvmf_tgt_poll_group_001",
00:12:41.545 "admin_qpairs": 0,
00:12:41.545 "io_qpairs": 0,
00:12:41.545 "current_admin_qpairs": 0,
00:12:41.545 "current_io_qpairs": 0,
00:12:41.545 "pending_bdev_io": 0,
00:12:41.545 "completed_nvme_io": 0,
00:12:41.545 "transports": []
00:12:41.545 },
00:12:41.545 {
00:12:41.545 "name": "nvmf_tgt_poll_group_002",
00:12:41.545 "admin_qpairs": 0,
00:12:41.545 "io_qpairs": 0,
00:12:41.545 "current_admin_qpairs": 0,
00:12:41.545 "current_io_qpairs": 0,
00:12:41.545 "pending_bdev_io": 0,
00:12:41.545 "completed_nvme_io": 0,
00:12:41.545 "transports": []
00:12:41.545 },
00:12:41.545 {
00:12:41.545 "name": "nvmf_tgt_poll_group_003",
00:12:41.545 "admin_qpairs": 0,
00:12:41.545 "io_qpairs": 0,
00:12:41.545 "current_admin_qpairs": 0,
00:12:41.545 "current_io_qpairs": 0,
00:12:41.545 "pending_bdev_io": 0,
00:12:41.545 "completed_nvme_io": 0,
00:12:41.545 "transports": []
00:12:41.545 }
00:12:41.545 ]
00:12:41.545 }'
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:41.545 [2024-07-15 03:15:47.591006] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:12:41.545 "tick_rate": 2700000000,
00:12:41.545 "poll_groups": [
00:12:41.545 {
00:12:41.545 "name": "nvmf_tgt_poll_group_000",
00:12:41.545 "admin_qpairs": 0,
00:12:41.545 "io_qpairs": 0,
00:12:41.545 "current_admin_qpairs": 0,
00:12:41.545 "current_io_qpairs": 0,
00:12:41.545 "pending_bdev_io": 0,
00:12:41.545 "completed_nvme_io": 0,
00:12:41.545 "transports": [
00:12:41.545 {
00:12:41.545 "trtype": "TCP"
00:12:41.545 }
00:12:41.545 ]
00:12:41.545 },
00:12:41.545 {
00:12:41.545 "name": "nvmf_tgt_poll_group_001",
00:12:41.545 "admin_qpairs": 0,
00:12:41.545 "io_qpairs": 0,
00:12:41.545 "current_admin_qpairs": 0,
00:12:41.545 "current_io_qpairs": 0,
00:12:41.545 "pending_bdev_io": 0,
00:12:41.545 "completed_nvme_io": 0,
00:12:41.545 "transports": [
00:12:41.545 {
00:12:41.545 "trtype": "TCP"
00:12:41.545 }
00:12:41.545 ]
00:12:41.545 },
00:12:41.545 {
00:12:41.545 "name": "nvmf_tgt_poll_group_002",
00:12:41.545 "admin_qpairs": 0,
00:12:41.545 "io_qpairs": 0,
00:12:41.545 "current_admin_qpairs": 0,
00:12:41.545 "current_io_qpairs": 0,
00:12:41.545 "pending_bdev_io": 0,
00:12:41.545 "completed_nvme_io": 0,
00:12:41.545 "transports": [
00:12:41.545 {
00:12:41.545 "trtype": "TCP"
00:12:41.545 }
00:12:41.545 ]
00:12:41.545 },
00:12:41.545 {
00:12:41.545 "name": "nvmf_tgt_poll_group_003",
00:12:41.545 "admin_qpairs": 0,
00:12:41.545 "io_qpairs": 0,
00:12:41.545 "current_admin_qpairs": 0,
00:12:41.545 "current_io_qpairs": 0,
00:12:41.545 "pending_bdev_io": 0,
00:12:41.545 "completed_nvme_io": 0,
00:12:41.545 "transports": [
00:12:41.545 {
00:12:41.545 "trtype": "TCP"
00:12:41.545 }
00:12:41.545 ]
00:12:41.545 }
00:12:41.545 ]
00:12:41.545 }'
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
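jcount and jsum, whose expansions are being traced here, are the two jq helpers target/rpc.sh uses to assert on nvmf_get_stats output: jcount counts how many values a filter yields (four poll groups, one per core of the -m 0xF mask), jsum totals them (zero admin and io qpairs before any host connects). A hedged reconstruction consistent with the traced pipelines, assuming the stats JSON sits in $stats; the io_qpairs sum in flight here resumes right after this aside:

  jcount() {
    local filter=$1
    jq "$filter" <<< "$stats" | wc -l                       # one output line per match
  }

  jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'   # numeric total across matches
  }

  (( $(jcount '.poll_groups[].name') == 4 ))                # the checks seen in the trace
  (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))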
00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.545 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.805 Malloc1 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.805 [2024-07-15 03:15:47.730915] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:41.805 [2024-07-15 03:15:47.753263] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:41.805 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:41.805 could not add new controller: failed to write to nvme-fabrics device 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:41.805 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.806 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.806 03:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.806 03:15:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.375 03:15:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.375 03:15:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:42.375 03:15:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.375 03:15:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:42.375 03:15:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:44.279 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:44.279 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:44.279 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.279 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:44.279 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.279 03:15:50 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:44.279 03:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.538 03:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.538 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:44.538 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:44.538 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.538 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:44.538 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.538 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:44.538 03:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.539 [2024-07-15 03:15:50.568147] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:44.539 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:44.539 could not add new controller: failed to write to nvme-fabrics device 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.539 03:15:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.108 03:15:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.108 03:15:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:45.108 03:15:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.108 03:15:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:45.108 03:15:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.667 03:15:53 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.667 [2024-07-15 03:15:53.312264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.667 03:15:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.924 03:15:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.924 03:15:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:47.924 03:15:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.924 03:15:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:47.924 03:15:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.458 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.459 [2024-07-15 03:15:56.117649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.459 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.459 03:15:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.459 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.459 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.459 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.459 03:15:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.459 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.459 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.459 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.459 03:15:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.718 03:15:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.718 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:12:50.718 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.718 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:50.718 03:15:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:52.656 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:52.656 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:52.656 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.656 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:52.656 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.656 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:52.656 03:15:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.916 03:15:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.916 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:52.916 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:52.916 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.916 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:52.916 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.916 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.917 [2024-07-15 03:15:58.886097] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.917 03:15:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.486 03:15:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.486 03:15:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.486 03:15:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.486 03:15:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:53.486 03:15:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.393 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.393 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.393 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.393 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.393 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.393 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:55.393 03:16:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.652 [2024-07-15 03:16:01.654284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.652 03:16:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.224 03:16:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.224 03:16:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.224 03:16:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.224 03:16:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:56.224 03:16:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.761 
03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.761 [2024-07-15 03:16:04.462562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.761 03:16:04 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.761 03:16:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.021 03:16:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.021 03:16:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.021 03:16:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.021 03:16:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:59.021 03:16:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:00.926 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:00.926 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:00.926 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.185 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 [2024-07-15 03:16:07.229999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 [2024-07-15 03:16:07.278039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 [2024-07-15 03:16:07.326236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
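The waitforserial / waitforserial_disconnect traces above poll lsblk until a device with the expected serial appears (or, for the disconnect variant, disappears). A minimal sketch of that polling pattern, using the 15-retry / 2-second cadence shown in the trace:

    # Poll until a block device with the given SERIAL shows up; mirrors the
    # autotest_common.sh trace above (retry counter, lsblk, grep -c, sleep 2).
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while ((i++ <= 15)); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
            sleep 2
        done
        return 1
    }

The disconnect variant inverts the test with grep -q -w, returning once the serial no longer appears in the lsblk output.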
00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 [2024-07-15 03:16:07.374403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
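Each of the five iterations traced here walks the same subsystem lifecycle over JSON-RPC. Condensed, the loop at target/rpc.sh lines 99-107 amounts to the following, where rpc_cmd dispatches to scripts/rpc.py against the running target:

    loops=5
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # becomes namespace ID 1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Each add_listener call produces the recurring "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice in the trace.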
00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 [2024-07-15 03:16:07.422576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.445 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:01.445 "tick_rate": 2700000000, 00:13:01.445 "poll_groups": [ 00:13:01.445 { 00:13:01.445 "name": "nvmf_tgt_poll_group_000", 00:13:01.445 "admin_qpairs": 2, 00:13:01.445 "io_qpairs": 84, 00:13:01.445 "current_admin_qpairs": 0, 00:13:01.445 "current_io_qpairs": 0, 00:13:01.445 "pending_bdev_io": 0, 00:13:01.445 "completed_nvme_io": 142, 00:13:01.445 "transports": [ 00:13:01.445 { 00:13:01.445 "trtype": "TCP" 00:13:01.445 } 00:13:01.445 ] 00:13:01.445 }, 00:13:01.445 { 00:13:01.445 "name": "nvmf_tgt_poll_group_001", 00:13:01.445 "admin_qpairs": 2, 00:13:01.445 "io_qpairs": 84, 00:13:01.445 "current_admin_qpairs": 0, 00:13:01.445 "current_io_qpairs": 0, 00:13:01.445 "pending_bdev_io": 0, 00:13:01.445 "completed_nvme_io": 182, 00:13:01.445 "transports": [ 00:13:01.445 { 00:13:01.445 "trtype": "TCP" 00:13:01.445 } 00:13:01.445 ] 00:13:01.445 }, 00:13:01.445 { 00:13:01.445 
"name": "nvmf_tgt_poll_group_002", 00:13:01.445 "admin_qpairs": 1, 00:13:01.445 "io_qpairs": 84, 00:13:01.445 "current_admin_qpairs": 0, 00:13:01.445 "current_io_qpairs": 0, 00:13:01.445 "pending_bdev_io": 0, 00:13:01.445 "completed_nvme_io": 228, 00:13:01.445 "transports": [ 00:13:01.445 { 00:13:01.445 "trtype": "TCP" 00:13:01.445 } 00:13:01.445 ] 00:13:01.445 }, 00:13:01.445 { 00:13:01.445 "name": "nvmf_tgt_poll_group_003", 00:13:01.445 "admin_qpairs": 2, 00:13:01.445 "io_qpairs": 84, 00:13:01.445 "current_admin_qpairs": 0, 00:13:01.446 "current_io_qpairs": 0, 00:13:01.446 "pending_bdev_io": 0, 00:13:01.446 "completed_nvme_io": 134, 00:13:01.446 "transports": [ 00:13:01.446 { 00:13:01.446 "trtype": "TCP" 00:13:01.446 } 00:13:01.446 ] 00:13:01.446 } 00:13:01.446 ] 00:13:01.446 }' 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.446 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:01.446 rmmod nvme_tcp 00:13:01.446 rmmod nvme_fabrics 00:13:01.705 rmmod nvme_keyring 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3123473 ']' 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3123473 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3123473 ']' 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3123473 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3123473 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3123473' 00:13:01.705 killing process with pid 3123473 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3123473 00:13:01.705 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3123473 00:13:01.966 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:01.966 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:01.966 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:01.966 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:01.966 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:01.966 03:16:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.966 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.966 03:16:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.867 03:16:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:03.867 00:13:03.867 real 0m24.993s 00:13:03.867 user 1m21.352s 00:13:03.867 sys 0m4.028s 00:13:03.867 03:16:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.867 03:16:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.867 ************************************ 00:13:03.867 END TEST nvmf_rpc 00:13:03.867 ************************************ 00:13:03.867 03:16:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:03.867 03:16:09 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:03.867 03:16:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:03.867 03:16:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.867 03:16:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:04.126 ************************************ 00:13:04.126 START TEST nvmf_invalid 00:13:04.126 ************************************ 00:13:04.126 03:16:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:04.126 * Looking for test storage... 
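Before the nvmf_invalid run gets going, note how the nvmf_rpc teardown just above validated its qpair accounting: the jsum helper traced at target/rpc.sh lines 19-20 sums one numeric field across all poll groups of the captured nvmf_get_stats output. A condensed sketch, assuming the stats blob is already held in $stats as in the trace:

    jsum() {
        local filter=$1
        # sum the selected numeric field across every poll group
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7,   so (( 7 > 0 )) passes
    jsum '.poll_groups[].io_qpairs'      # 4*84    = 336, so (( 336 > 0 )) passes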
00:13:04.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.126 03:16:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.126 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:04.126 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.126 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.126 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.126 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.126 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.126 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.126 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.126 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.127 03:16:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:06.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:06.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:06.030 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:06.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.030 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.031 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:06.031 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:06.031 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.031 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.031 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.031 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.031 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:06.031 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:06.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:13:06.289 00:13:06.289 --- 10.0.0.2 ping statistics --- 00:13:06.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.289 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:13:06.289 00:13:06.289 --- 10.0.0.1 ping statistics --- 00:13:06.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.289 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3127967 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3127967 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3127967 ']' 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.289 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.289 [2024-07-15 03:16:12.296568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
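The nvmf_invalid prologue above rebuilds the test network from scratch: one port of the discovered E810 pair (cvl_0_0) is moved into a network namespace to act as the target, while cvl_0_1 stays in the root namespace as the initiator. Stripped of the xtrace framing, the nvmf/common.sh bring-up reduces to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
    # finally the target starts inside the namespace (the trace uses the
    # full build/bin/nvmf_tgt path):
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Both pings answering in well under a millisecond (0.196 ms and 0.116 ms above) confirms the plumbing before nvmf_tgt comes up.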
00:13:06.289 [2024-07-15 03:16:12.296654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.289 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.289 [2024-07-15 03:16:12.370012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.548 [2024-07-15 03:16:12.465511] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.548 [2024-07-15 03:16:12.465569] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.548 [2024-07-15 03:16:12.465607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.548 [2024-07-15 03:16:12.465627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.548 [2024-07-15 03:16:12.465643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.548 [2024-07-15 03:16:12.465776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.548 [2024-07-15 03:16:12.465838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.548 [2024-07-15 03:16:12.465906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.548 [2024-07-15 03:16:12.465913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.548 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.548 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:06.548 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:06.548 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:06.548 03:16:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.548 03:16:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.548 03:16:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:06.548 03:16:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17805 00:13:06.806 [2024-07-15 03:16:12.898797] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:06.806 03:16:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:06.806 { 00:13:06.806 "nqn": "nqn.2016-06.io.spdk:cnode17805", 00:13:06.806 "tgt_name": "foobar", 00:13:06.806 "method": "nvmf_create_subsystem", 00:13:06.806 "req_id": 1 00:13:06.806 } 00:13:06.806 Got JSON-RPC error response 00:13:06.806 response: 00:13:06.806 { 00:13:06.806 "code": -32603, 00:13:06.806 "message": "Unable to find target foobar" 00:13:06.806 }' 00:13:06.806 03:16:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:06.806 { 00:13:06.806 "nqn": "nqn.2016-06.io.spdk:cnode17805", 00:13:06.806 "tgt_name": "foobar", 00:13:06.806 "method": "nvmf_create_subsystem", 00:13:06.806 "req_id": 1 00:13:06.806 } 00:13:06.806 Got JSON-RPC error response 00:13:06.806 response: 00:13:06.806 { 00:13:06.806 "code": -32603, 00:13:06.806 "message": "Unable to find target foobar" 
00:13:06.806 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:06.806 03:16:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:06.806 03:16:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17206 00:13:07.065 [2024-07-15 03:16:13.195810] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17206: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:07.324 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:07.324 { 00:13:07.324 "nqn": "nqn.2016-06.io.spdk:cnode17206", 00:13:07.324 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:07.324 "method": "nvmf_create_subsystem", 00:13:07.324 "req_id": 1 00:13:07.324 } 00:13:07.324 Got JSON-RPC error response 00:13:07.324 response: 00:13:07.324 { 00:13:07.324 "code": -32602, 00:13:07.324 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:07.324 }' 00:13:07.324 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:07.324 { 00:13:07.324 "nqn": "nqn.2016-06.io.spdk:cnode17206", 00:13:07.324 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:07.324 "method": "nvmf_create_subsystem", 00:13:07.324 "req_id": 1 00:13:07.324 } 00:13:07.324 Got JSON-RPC error response 00:13:07.324 response: 00:13:07.324 { 00:13:07.324 "code": -32602, 00:13:07.324 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:07.324 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:07.324 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:07.324 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6166 00:13:07.324 [2024-07-15 03:16:13.452638] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6166: invalid model number 'SPDK_Controller' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:07.583 { 00:13:07.583 "nqn": "nqn.2016-06.io.spdk:cnode6166", 00:13:07.583 "model_number": "SPDK_Controller\u001f", 00:13:07.583 "method": "nvmf_create_subsystem", 00:13:07.583 "req_id": 1 00:13:07.583 } 00:13:07.583 Got JSON-RPC error response 00:13:07.583 response: 00:13:07.583 { 00:13:07.583 "code": -32602, 00:13:07.583 "message": "Invalid MN SPDK_Controller\u001f" 00:13:07.583 }' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:07.583 { 00:13:07.583 "nqn": "nqn.2016-06.io.spdk:cnode6166", 00:13:07.583 "model_number": "SPDK_Controller\u001f", 00:13:07.583 "method": "nvmf_create_subsystem", 00:13:07.583 "req_id": 1 00:13:07.583 } 00:13:07.583 Got JSON-RPC error response 00:13:07.583 response: 00:13:07.583 { 00:13:07.583 "code": -32602, 00:13:07.583 "message": "Invalid MN SPDK_Controller\u001f" 00:13:07.583 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 
03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 
03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '=\W)TA`Cg#>X%U$|b6C]t' 00:13:07.583 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '=\W)TA`Cg#>X%U$|b6C]t' nqn.2016-06.io.spdk:cnode7869 00:13:07.841 [2024-07-15 03:16:13.809849] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7869: invalid serial number '=\W)TA`Cg#>X%U$|b6C]t' 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:07.841 { 00:13:07.841 "nqn": "nqn.2016-06.io.spdk:cnode7869", 00:13:07.841 "serial_number": "=\\W)TA`Cg#>X%U$|b6C]t", 00:13:07.841 "method": "nvmf_create_subsystem", 00:13:07.841 "req_id": 1 00:13:07.841 } 00:13:07.841 Got JSON-RPC error response 00:13:07.841 response: 00:13:07.841 { 
00:13:07.841 "code": -32602, 00:13:07.841 "message": "Invalid SN =\\W)TA`Cg#>X%U$|b6C]t" 00:13:07.841 }' 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:07.841 { 00:13:07.841 "nqn": "nqn.2016-06.io.spdk:cnode7869", 00:13:07.841 "serial_number": "=\\W)TA`Cg#>X%U$|b6C]t", 00:13:07.841 "method": "nvmf_create_subsystem", 00:13:07.841 "req_id": 1 00:13:07.841 } 00:13:07.841 Got JSON-RPC error response 00:13:07.841 response: 00:13:07.841 { 00:13:07.841 "code": -32602, 00:13:07.841 "message": "Invalid SN =\\W)TA`Cg#>X%U$|b6C]t" 00:13:07.841 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 
00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:07.841 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 
00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 
00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:07.842 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]] 00:13:07.843 03:16:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'RZc%t /dev/null' 00:13:10.500 03:16:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.039 03:16:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:13.039 00:13:13.039 real 0m8.665s 00:13:13.039 user 0m20.336s 00:13:13.039 sys 0m2.478s 00:13:13.039 03:16:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:13.039 03:16:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:13.039 ************************************ 00:13:13.039 END TEST nvmf_invalid 00:13:13.039 ************************************ 00:13:13.039 03:16:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:13.039 03:16:18 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:13.039 03:16:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:13.039 03:16:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.039 03:16:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:13.039 ************************************ 00:13:13.039 START TEST nvmf_abort 00:13:13.039 ************************************ 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:13.039 * Looking for test storage... 
00:13:13.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.039 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.040 03:16:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:14.949 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.949 03:16:20 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:14.949 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:14.949 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:14.949 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:14.949 03:16:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.949 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.949 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.949 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:14.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:13:14.949 00:13:14.949 --- 10.0.0.2 ping statistics --- 00:13:14.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.949 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:13:14.949 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
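The nvmf_tcp_init sequence traced above, consolidated to its bare commands (namespace, interface names, and addresses exactly as they appear in this log; the reply to the second ping follows below):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # host to namespace, verified above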
00:13:14.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:13:14.949 00:13:14.949 --- 10.0.0.1 ping statistics --- 00:13:14.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.949 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:13:14.949 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.949 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:14.949 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:14.949 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.949 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:14.949 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3130592 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3130592 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3130592 ']' 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.950 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.210 [2024-07-15 03:16:21.115695] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:15.210 [2024-07-15 03:16:21.115781] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.210 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.210 [2024-07-15 03:16:21.186324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.210 [2024-07-15 03:16:21.283822] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.210 [2024-07-15 03:16:21.283899] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:15.210 [2024-07-15 03:16:21.283927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.210 [2024-07-15 03:16:21.283941] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.210 [2024-07-15 03:16:21.283953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.210 [2024-07-15 03:16:21.284044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.210 [2024-07-15 03:16:21.287898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.210 [2024-07-15 03:16:21.287912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 [2024-07-15 03:16:21.437035] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 Malloc0 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 Delay0 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.470 03:16:21 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 [2024-07-15 03:16:21.513743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.470 03:16:21 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:15.470 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.730 [2024-07-15 03:16:21.660019] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:17.635 Initializing NVMe Controllers 00:13:17.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:17.635 controller IO queue size 128 less than required 00:13:17.635 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:17.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:17.635 Initialization complete. Launching workers. 
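Before this abort run, the target was configured through the rpc_cmd calls traced above; expressed as direct invocations they amount to the following (rpc.py path shortened from the workspace path in the log; arguments copied from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0       # 64 MiB bdev, 4096-byte blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000            # latencies in microseconds, i.e. 1 s each
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The Delay0 namespace is what makes the abort example meaningful: with 1 s injected latencies and a queue depth of 128, nearly all submitted I/Os back up in the driver and are aborted, which is what the statistics reported below show.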
00:13:17.635 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32851 00:13:17.635 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32912, failed to submit 62 00:13:17.635 success 32855, unsuccess 57, failed 0 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:17.635 rmmod nvme_tcp 00:13:17.635 rmmod nvme_fabrics 00:13:17.635 rmmod nvme_keyring 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3130592 ']' 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3130592 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3130592 ']' 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3130592 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3130592 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:17.635 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3130592' 00:13:17.635 killing process with pid 3130592 00:13:17.894 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3130592 00:13:17.894 03:16:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3130592 00:13:17.894 03:16:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.894 03:16:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:17.894 03:16:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:17.894 03:16:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.894 03:16:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:17.894 03:16:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.894 03:16:24 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.894 03:16:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.429 03:16:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:20.429 00:13:20.429 real 0m7.348s 00:13:20.429 user 0m10.518s 00:13:20.429 sys 0m2.574s 00:13:20.429 03:16:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.429 03:16:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:20.429 ************************************ 00:13:20.429 END TEST nvmf_abort 00:13:20.429 ************************************ 00:13:20.429 03:16:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:20.429 03:16:26 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:20.429 03:16:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:20.429 03:16:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.429 03:16:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:20.429 ************************************ 00:13:20.429 START TEST nvmf_ns_hotplug_stress 00:13:20.429 ************************************ 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:20.429 * Looking for test storage... 00:13:20.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.429 03:16:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.429 03:16:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:20.429 03:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:22.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:22.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.333 03:16:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:22.333 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.333 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:22.334 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.334 03:16:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:22.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:13:22.334 00:13:22.334 --- 10.0.0.2 ping statistics --- 00:13:22.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.334 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:13:22.334 00:13:22.334 --- 10.0.0.1 ping statistics --- 00:13:22.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.334 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3132934 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3132934 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3132934 ']' 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.334 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.594 [2024-07-15 03:16:28.502557] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:13:22.594 [2024-07-15 03:16:28.502659] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.594 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.594 [2024-07-15 03:16:28.580315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.594 [2024-07-15 03:16:28.678390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.594 [2024-07-15 03:16:28.678459] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.594 [2024-07-15 03:16:28.678476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.594 [2024-07-15 03:16:28.678489] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.594 [2024-07-15 03:16:28.678501] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.594 [2024-07-15 03:16:28.678586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.594 [2024-07-15 03:16:28.678643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.594 [2024-07-15 03:16:28.678647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.852 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.852 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:22.852 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.852 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:22.852 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.852 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.852 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:22.852 03:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:23.108 [2024-07-15 03:16:29.096677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.108 03:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:23.365 03:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.621 [2024-07-15 03:16:29.616086] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.621 03:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:23.878 03:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:13:24.135 Malloc0 00:13:24.135 03:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:24.392 Delay0 00:13:24.392 03:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.649 03:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:24.907 NULL1 00:13:24.907 03:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:25.165 03:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3133229 00:13:25.165 03:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:25.165 03:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:25.165 03:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.165 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.541 Read completed with error (sct=0, sc=11) 00:13:26.541 03:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.541 03:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:26.541 03:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:26.799 true 00:13:26.799 03:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:26.799 03:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.731 03:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.989 03:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:27.989 03:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:28.247 true 00:13:28.247 03:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:28.247 03:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.505 03:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.765 03:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:28.765 03:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:28.765 true 00:13:28.765 03:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:29.024 03:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.283 03:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.283 03:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:29.283 03:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:29.543 true 00:13:29.543 03:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:29.543 03:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.954 03:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.954 03:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:30.954 03:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:31.213 true 00:13:31.213 03:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:31.213 03:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.149 03:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.407 03:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:32.407 03:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:32.665 true 00:13:32.665 03:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:32.665 03:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.922 03:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.182 03:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:33.182 03:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:33.182 true 00:13:33.182 03:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:33.182 03:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.748 03:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.748 03:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:33.748 03:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:34.005 true 00:13:34.005 03:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:34.005 03:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.384 03:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.384 03:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:35.384 03:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:35.642 true 00:13:35.642 03:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 
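
The xtrace above keeps cycling through the same five script lines of target/ns_hotplug_stress.sh (@44 kill -0, @45 nvmf_subsystem_remove_ns, @46 nvmf_subsystem_add_ns, @49 the null_size bump, @50 bdev_null_resize). A minimal bash sketch of that loop, reconstructed only from the line markers visible here (rpc_py and PERF_PID are set up earlier in the log; the exact script text is assumed, not quoted):

    # Stress loop: while the perf workload is still alive, churn
    # namespace 1 and grow the NULL1 bdev by one block each pass.
    null_size=1000
    while kill -0 "$PERF_PID"; do                                         # @44
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46
        null_size=$((null_size + 1))                                      # @49
        $rpc_py bdev_null_resize NULL1 "$null_size"                       # @50
    done

Each bare "true" in the trace is bdev_null_resize reporting success before the next liveness check.
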
00:13:35.642 03:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.899 03:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.154 03:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:36.154 03:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:36.411 true 00:13:36.411 03:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:36.411 03:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.342 03:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.599 03:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:37.599 03:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:37.856 true 00:13:37.856 03:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:37.856 03:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.113 03:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.370 03:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:38.370 03:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:38.627 true 00:13:38.627 03:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:38.628 03:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.562 03:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.562 03:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:39.562 03:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:39.820 true 00:13:39.820 03:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:39.820 03:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.078 03:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.336 03:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:40.336 03:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:40.594 true 00:13:40.594 03:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:40.594 03:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.852 03:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.109 03:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:41.109 03:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:41.366 true 00:13:41.366 03:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:41.366 03:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.302 03:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.560 03:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:42.560 03:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:42.818 true 00:13:42.818 03:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:42.818 03:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.076 03:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.334 03:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:43.334 03:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:43.592 true 00:13:43.592 03:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:43.592 03:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.528 03:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.787 03:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:44.787 03:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:45.045 true 00:13:45.045 03:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:45.045 03:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.303 03:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.561 03:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:45.561 03:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:45.819 true 00:13:45.819 03:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:45.819 03:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.800 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.800 03:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.800 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.800 03:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:46.800 03:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:47.058 true 00:13:47.058 03:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:47.058 03:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.315 03:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.572 03:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:47.572 03:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:47.829 true 00:13:47.829 03:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 
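
Worth noting for readers unfamiliar with the idiom: kill -0 delivers no signal at all. With signal number 0 the kernel performs only the existence and permission check, so the command's exit status doubles as a liveness probe for PID 3133229 without disturbing the perf process, e.g.:

    # Poll a PID without signalling it; exit status 0 means it is alive.
    if kill -0 "$PERF_PID" 2>/dev/null; then
        echo "perf (PID $PERF_PID) still running; keep resizing"
    fi

When spdk_nvme_perf finally exits, kill -0 fails and the loop falls through, which is exactly the "(3133229) - No such process" message that appears further down.
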
00:13:47.829 03:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.764 03:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.021 03:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:49.021 03:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:49.279 true 00:13:49.279 03:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:49.279 03:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.537 03:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.795 03:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:49.795 03:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:50.053 true 00:13:50.053 03:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:50.053 03:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.310 03:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.567 03:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:50.567 03:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:50.825 true 00:13:50.825 03:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:50.825 03:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.759 03:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.016 03:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:52.016 03:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:52.274 true 00:13:52.274 03:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:52.274 03:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.842 03:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.842 03:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:52.842 03:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:53.101 true 00:13:53.101 03:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:53.359 03:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.359 03:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.617 03:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:53.617 03:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:53.875 true 00:13:53.875 03:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:53.875 03:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.247 03:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.247 03:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:55.247 03:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:55.504 Initializing NVMe Controllers 00:13:55.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.504 Controller IO queue size 128, less than required. 00:13:55.504 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.504 Controller IO queue size 128, less than required. 00:13:55.504 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:55.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:55.504 Initialization complete. Launching workers. 
00:13:55.504 ======================================================== 00:13:55.504 Latency(us) 00:13:55.504 Device Information : IOPS MiB/s Average min max 00:13:55.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 828.39 0.40 75349.75 2907.63 1071377.85 00:13:55.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10452.43 5.10 12247.07 1626.59 447432.54 00:13:55.504 ======================================================== 00:13:55.504 Total : 11280.82 5.51 16880.92 1626.59 1071377.85 00:13:55.504 00:13:55.504 true 00:13:55.504 03:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3133229 00:13:55.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3133229) - No such process 00:13:55.504 03:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3133229 00:13:55.504 03:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.762 03:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.018 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:56.018 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:56.018 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:56.018 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.018 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:56.275 null0 00:13:56.275 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.275 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.275 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:56.532 null1 00:13:56.532 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.532 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.532 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:56.790 null2 00:13:56.790 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.790 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.790 03:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:57.048 null3 00:13:57.048 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:57.048 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:57.048 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:57.305 null4 00:13:57.305 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:57.305 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:57.305 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:57.563 null5 00:13:57.563 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:57.563 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:57.563 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:57.820 null6 00:13:57.820 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:57.820 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:57.820 03:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:58.079 null7 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
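
A quick consistency check on the spdk_nvme_perf summary printed just before the cleanup: the Total row (11280.82 IOPS, 16880.92 us) should be the IOPS-weighted mean of the two per-namespace rows, and it is. A one-liner to verify, using only the figures from the table:

    # Recompute the Total average latency as the IOPS-weighted mean
    # of the two per-namespace averages from the perf summary above.
    awk 'BEGIN {
        i1 = 828.39;   l1 = 75349.75   # NSID 1
        i2 = 10452.43; l2 = 12247.07   # NSID 2
        printf "%.2f us\n", (i1 * l1 + i2 * l2) / (i1 + i2)
    }'
    # -> 16880.92 us, matching the reported Total

The large gap between the two rows tracks with NSID 1 being the namespace that was hot-removed and re-added throughout the run.
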
00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
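
From the repeating @14-@18 and @58-@64 markers, the shape of this phase is eight concurrent workers, each flipping its own namespace on and off ten times. A rough reconstruction, assuming only the loop bounds shown in the trace ("i < 10", nthreads=8):

    # Worker: attach bdev as namespace $nsid, then detach it, ten times.
    add_remove() {
        local nsid=$1 bdev=$2                                             # @14
        for ((i = 0; i < 10; i++)); do                                    # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" \
                nqn.2016-06.io.spdk:cnode1 "$bdev"                        # @17
            $rpc_py nvmf_subsystem_remove_ns \
                nqn.2016-06.io.spdk:cnode1 "$nsid"                        # @18
        done
    }

    nthreads=8; pids=()                                                   # @58
    for ((i = 0; i < nthreads; i++)); do                                  # @59
        $rpc_py bdev_null_create "null$i" 100 4096                        # @60
    done
    for ((i = 0; i < nthreads; i++)); do                                  # @62
        add_remove $((i + 1)) "null$i" &                                  # @63
        pids+=($!)                                                        # @64
    done
    wait "${pids[@]}"                                                     # @66

The wait on all eight worker PIDs (3137150 ... 3137166) shows up just below, after which the interleaved add/remove traffic from the eight subshells dominates the rest of the log.
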
00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3137150 3137151 3137153 3137155 3137157 3137160 3137163 3137166 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.079 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.337 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.337 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.337 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.337 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.337 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.337 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.337 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.337 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.594 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.852 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.852 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.852 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.852 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.852 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.852 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.852 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.852 03:17:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.110 03:17:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.110 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.368 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.368 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.368 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.368 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.369 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.369 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.369 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.369 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.626 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.883 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.883 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.883 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.883 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.883 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.883 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.883 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.883 03:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.142 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.142 
03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:00.400 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.400 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:00.400 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.400 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:00.400 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:00.400 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.400 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.400 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.657 03:17:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:00.914 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.914 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:00.914 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.914 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:00.914 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:00.914 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.914 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:01.171 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:01.171 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:01.171 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.171 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:01.171 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.171 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.171 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:01.171 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.171 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.171 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:01.428 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.428 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.428 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:01.428 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.428 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.428 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:01.428 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.429 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.429 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:01.429 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.429 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.429 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:01.429 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.429 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.429 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:01.429 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:01.429 
03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:01.686 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:01.686 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:01.686 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.686 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:01.686 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:01.686 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.979 03:17:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:02.236 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.236 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.236 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:02.236 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:02.236 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:02.237 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:02.237 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:02.237 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.494 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.495 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:02.753 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.753 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:02.753 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.753 
03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:02.753 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:02.753 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:02.753 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:02.753 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.011 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:03.012 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.012 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.012 03:17:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:03.270 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:03.270 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:03.270 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:03.270 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.270 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.270 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:03.270 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:03.270 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
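The long stretch above is those eight workers racing one another, so their xtrace output interleaves freely: back-to-back (( ++i )) evaluations followed by two (( i < 10 )) checks are different workers sharing one trace stream, not a script bug. Stripped of that noise, the RPC pair being exercised on every single iteration is just:

    # The RPC pair each worker hammers: attach a bdev to the subsystem
    # as a namespace with an explicit NSID, then detach it by NSID.
    # cnode1 and the null0..null7 bdevs were created earlier in the
    # script, before the portion of the log shown here.
    scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5

On the initiator side every add/remove surfaces as a namespace attach/detach event, which is exactly the hotplug path the test is stressing.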
00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:03.528 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:03.528 rmmod nvme_tcp 00:14:03.528 rmmod nvme_fabrics 00:14:03.529 rmmod nvme_keyring 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3132934 ']' 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3132934 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3132934 ']' 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3132934 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3132934 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3132934' 00:14:03.529 killing process with pid 3132934 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3132934 00:14:03.529 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3132934 00:14:03.788 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.788 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.788 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:03.788 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.788 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.788 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.789 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.789 03:17:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.323 03:17:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:06.323 00:14:06.323 real 0m45.758s 00:14:06.323 user 3m29.359s 00:14:06.323 sys 0m15.717s 00:14:06.323 03:17:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:06.323 03:17:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.323 ************************************ 00:14:06.323 END TEST nvmf_ns_hotplug_stress 00:14:06.323 ************************************ 00:14:06.323 03:17:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:06.323 03:17:11 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:06.323 03:17:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:06.323 03:17:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.323 03:17:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:06.323 ************************************ 00:14:06.323 START TEST nvmf_connect_stress 00:14:06.323 ************************************ 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:06.323 * Looking for test storage... 
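Once the last wait returns, the trace runs the standard teardown before the END TEST banner: the traps are cleared (@68), nvmftestfini/nvmfcleanup unload the kernel initiator modules inside a bounded retry loop, and killprocess signals the nvmf target (pid 3132934) after confirming it is not a sudo process. A sketch of that sequence as the trace shows it; the early break on success inside the retry loop is an assumption:

    # Teardown visible in the trace: clear traps, unload nvme-tcp (which
    # also drops nvme_fabrics/nvme_keyring, per the rmmod output above),
    # then kill the target app by PID and reap it.
    trap - SIGINT SIGTERM EXIT              # ns_hotplug_stress.sh@68
    sync                                    # nvmf/common.sh@117
    set +e
    for i in {1..20}; do                    # nvmf/common.sh@121
        modprobe -v -r nvme-tcp && break    # @122; break-on-success assumed
    done
    modprobe -v -r nvme-fabrics             # @123
    set -e
    kill 3132934 && wait 3132934            # killprocess: kill/wait seen at autotest_common.sh@967/@972

After that, run_test prints the 45-second timing summary and the END TEST banner above, then moves straight on to the nvmf_connect_stress suite, whose setup begins here.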
00:14:06.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.323 03:17:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:06.324 03:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:06.324 03:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.324 03:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:06.324 03:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.324 03:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:06.324 03:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:06.324 03:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:06.324 03:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.225 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:08.226 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:08.226 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:08.226 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.226 03:17:14 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:08.226 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:08.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:08.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:14:08.226 00:14:08.226 --- 10.0.0.2 ping statistics --- 00:14:08.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.226 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:08.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:14:08.226 00:14:08.226 --- 10.0.0.1 ping statistics --- 00:14:08.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.226 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3139920 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3139920 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3139920 ']' 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.226 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.226 [2024-07-15 03:17:14.227050] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
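The nvmftestinit sequence above carves the two E810 ports into a point-to-point rig: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side, cvl_0_1 stays in the root namespace as the initiator, and a first-position iptables rule opens TCP/4420 before both directions are verified with ping. Restated from the trace:

    ip netns add cvl_0_0_ns_spdk                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Once both pings succeed, every target-side command, including nvmf_tgt itself, is prefixed with 'ip netns exec cvl_0_0_ns_spdk' via NVMF_TARGET_NS_CMD.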
00:14:08.226 [2024-07-15 03:17:14.227140] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.226 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.226 [2024-07-15 03:17:14.299026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:08.486 [2024-07-15 03:17:14.390076] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.486 [2024-07-15 03:17:14.390133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.486 [2024-07-15 03:17:14.390159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.486 [2024-07-15 03:17:14.390173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.486 [2024-07-15 03:17:14.390185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.486 [2024-07-15 03:17:14.390288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.486 [2024-07-15 03:17:14.390387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.486 [2024-07-15 03:17:14.390389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.486 [2024-07-15 03:17:14.539823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.486 [2024-07-15 03:17:14.577060] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.486 NULL1 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3140043 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
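With nvmf_tgt up inside the namespace (started as 'nvmf_tgt -i 0 -e 0xFFFF -m 0xE' and reached through waitforlisten on /var/tmp/spdk.sock), connect_stress.sh provisions the target over RPC and launches the stress client against it. The same sequence condensed, assuming rpc_cmd resolves to scripts/rpc.py as in the SPDK harness:

    RPC=scripts/rpc.py     # what rpc_cmd wraps (assumption)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10       # allow any host, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512     # 1000 MB null bdev, 512-byte blocks

    # Stress client from the log; -t 10 lines up with the ~10 s run below.
    test/nvme/connect_stress/connect_stress -c 0x1 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    PERF_PID=$!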
00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.486 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.744 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.744 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.744 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.744 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.744 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.744 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.744 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:08.744 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.744 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.744 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.001 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.001 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:09.001 03:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.001 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.001 03:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.259 03:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.259 03:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:09.259 03:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.259 03:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.259 03:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.516 03:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.516 03:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 
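The alternating kill -0 / rpc_cmd pairs that fill the next several seconds of trace are the monitor loop at connect_stress.sh lines 34-35: as long as the stress client still answers kill -0, the script keeps replaying the rpc.txt batch assembled by the seq 1 20 / cat loop above, so the target services RPC traffic while connections churn. Roughly (the rpc.txt contents are not visible in this log):

    # Keep the target busy with RPCs while the stress client is alive.
    while kill -0 $PERF_PID; do
        rpc_cmd < $rpcs
    done
    # The last probe fails with 'No such process' (visible below), then:
    wait $PERF_PID       # line 38 in the trace
    rm -f $rpcs          # line 39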
00:14:09.516 03:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.516 03:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.516 03:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.080 03:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.080 03:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:10.080 03:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.080 03:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.080 03:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.337 03:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.337 03:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:10.337 03:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.337 03:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.337 03:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.596 03:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.596 03:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:10.596 03:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.596 03:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.596 03:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.854 03:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.854 03:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:10.854 03:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.854 03:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.854 03:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.112 03:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.112 03:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:11.112 03:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.112 03:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.112 03:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.677 03:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.677 03:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:11.677 03:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.677 03:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.677 03:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.934 03:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.934 03:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:11.934 03:17:17 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.934 03:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.935 03:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.192 03:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.192 03:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:12.192 03:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.192 03:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.192 03:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.450 03:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.450 03:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:12.450 03:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.450 03:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.450 03:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.707 03:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.707 03:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:12.707 03:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.707 03:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.707 03:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.272 03:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.272 03:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:13.272 03:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.272 03:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.272 03:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.530 03:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.530 03:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:13.530 03:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.530 03:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.530 03:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.787 03:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.787 03:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:13.787 03:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.787 03:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.787 03:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.044 03:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.044 03:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:14.044 03:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.044 
03:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.044 03:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.301 03:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.301 03:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:14.301 03:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.301 03:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.301 03:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.866 03:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.866 03:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:14.866 03:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.866 03:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.866 03:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.123 03:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.123 03:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:15.123 03:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.123 03:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.123 03:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.381 03:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.381 03:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:15.381 03:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.381 03:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.381 03:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.639 03:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.639 03:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:15.639 03:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.639 03:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.639 03:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.897 03:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.897 03:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:15.897 03:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.897 03:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.897 03:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.462 03:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.462 03:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:16.462 03:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.462 03:17:22 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.462 03:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.719 03:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.719 03:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:16.719 03:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.719 03:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.719 03:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.977 03:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.977 03:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:16.977 03:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.977 03:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.977 03:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.235 03:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.235 03:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:17.235 03:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.235 03:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.235 03:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.493 03:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.493 03:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:17.493 03:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.493 03:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.493 03:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.057 03:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.057 03:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:18.057 03:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.057 03:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.057 03:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.342 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.342 03:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:18.342 03:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.342 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.342 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.599 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.599 03:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:18.599 03:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.599 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:14:18.599 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.857 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3140043 00:14:18.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3140043) - No such process 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3140043 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.857 rmmod nvme_tcp 00:14:18.857 rmmod nvme_fabrics 00:14:18.857 rmmod nvme_keyring 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3139920 ']' 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3139920 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3139920 ']' 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3139920 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:18.857 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.858 03:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3139920 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3139920' 00:14:19.117 killing process with pid 3139920 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3139920 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3139920 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.117 03:17:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.652 03:17:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:21.652 00:14:21.652 real 0m15.361s 00:14:21.652 user 0m38.314s 00:14:21.652 sys 0m6.004s 00:14:21.652 03:17:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:21.652 03:17:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.652 ************************************ 00:14:21.652 END TEST nvmf_connect_stress 00:14:21.652 ************************************ 00:14:21.652 03:17:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:21.652 03:17:27 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:21.652 03:17:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:21.652 03:17:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.652 03:17:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.652 ************************************ 00:14:21.652 START TEST nvmf_fused_ordering 00:14:21.652 ************************************ 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:21.652 * Looking for test storage... 
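nvmftestfini above unwinds the rig in reverse: host-side kernel modules first (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), then the target process (killprocess runs 'ps --no-headers -o comm=' first and refuses to signal a PID whose comm is sudo), then the namespace plumbing. A sketch of that order, with the _remove_spdk_ns internals assumed rather than shown in this log:

    sync
    modprobe -v -r nvme-tcp       # retried up to 20 times in the harness
    modprobe -v -r nvme-fabrics
    [ "$(ps --no-headers -o comm= $nvmfpid)" != sudo ] && kill $nvmfpid && wait $nvmfpid
    ip netns delete cvl_0_0_ns_spdk   # assumed to be part of _remove_spdk_ns
    ip -4 addr flush cvl_0_1          # last line of this test's trace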
00:14:21.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.652 03:17:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:21.653 03:17:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:23.558 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:23.558 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:23.558 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.558 03:17:29 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:23.558 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:23.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:23.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:14:23.558 00:14:23.558 --- 10.0.0.2 ping statistics --- 00:14:23.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.558 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:23.558 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:14:23.558 00:14:23.558 --- 10.0.0.1 ping statistics --- 00:14:23.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.559 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3143191 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3143191 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3143191 ']' 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.559 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.559 [2024-07-15 03:17:29.553467] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
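The nvmftestinit trace above wires the machine's two E810 ports back-to-back through a network namespace: cvl_0_0 becomes the target-side interface at 10.0.0.2/24 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, port 4420 is opened for NVMe/TCP, and both directions are ping-verified. A minimal standalone sketch of that wiring, using the interface names detected in this run (they are auto-detected and will differ on other machines):

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"                                        # private namespace for the target
    ip link set cvl_0_0 netns "$ns"                           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1                    # target -> initiator

Because nvmf_tgt is then launched under ip netns exec (visible just above), its listener binds inside the namespace, so initiator traffic has to cross the cabled port pair rather than loopback.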
00:14:23.559 [2024-07-15 03:17:29.553566] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.559 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.559 [2024-07-15 03:17:29.622146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.817 [2024-07-15 03:17:29.711629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.817 [2024-07-15 03:17:29.711685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.817 [2024-07-15 03:17:29.711705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.817 [2024-07-15 03:17:29.711719] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.817 [2024-07-15 03:17:29.711732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.818 [2024-07-15 03:17:29.711762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.818 [2024-07-15 03:17:29.860298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.818 [2024-07-15 03:17:29.876528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.818 03:17:29 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.818 NULL1 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.818 03:17:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:23.818 [2024-07-15 03:17:29.921307] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:23.818 [2024-07-15 03:17:29.921351] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143218 ] 00:14:23.818 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.384 Attached to nqn.2016-06.io.spdk:cnode1 00:14:24.384 Namespace ID: 1 size: 1GB
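fused_ordering.sh@15 through @20 configure the target entirely over JSON-RPC before launching the exerciser. The same sequence expressed as direct scripts/rpc.py calls (rpc_cmd is the test harness's wrapper around the same RPC surface; every flag spelling below is copied from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options as traced
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                    # any host may connect, at most 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                        # the namespaced address set up earlier
    $rpc bdev_null_create NULL1 1000 512                  # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # exported as the 1GB namespace seen above

Note that although the target runs inside cvl_0_0_ns_spdk, these calls work from the root namespace: rpc.py talks to the UNIX domain socket /var/tmp/spdk.sock, which is a filesystem object and unaffected by network namespacing.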
00:14:24.384 fused_ordering(0) … 00:14:26.712 fused_ordering(1012) [condensed: the exerciser printed one fused_ordering(i) progress line per operation; the 1013 lines in this span advance in bursts of roughly 205 operations between 00:14:24.384 and 00:14:26.712, and the final eleven appear uncondensed below]
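Each fused_ordering(i) line appears to report one fused command pair completed by the test app; in NVMe the only architected fused operation is COMPARE followed by WRITE, two commands the controller must treat as an adjacent, atomic unit, and the test's name and its 1024 numbered completions suggest it is verifying that the TCP transport preserves exactly that pairing back-to-back. With the subsystem still configured, the binary can be rerun by hand (invocation copied from the trace; -r takes a transport ID string):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'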
00:14:26.712 fused_ordering(1013) 00:14:26.712 fused_ordering(1014) 00:14:26.712 fused_ordering(1015) 00:14:26.712 fused_ordering(1016) 00:14:26.712 fused_ordering(1017) 00:14:26.712 fused_ordering(1018) 00:14:26.712 fused_ordering(1019) 00:14:26.712 fused_ordering(1020) 00:14:26.712 fused_ordering(1021) 00:14:26.712 fused_ordering(1022) 00:14:26.712 fused_ordering(1023) 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.712 rmmod nvme_tcp 00:14:26.712 rmmod nvme_fabrics 00:14:26.712 rmmod nvme_keyring 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3143191 ']' 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3143191 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3143191 ']' 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3143191 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3143191 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3143191' 00:14:26.712 killing process with pid 3143191 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3143191 00:14:26.712 03:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3143191 00:14:26.971 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.971 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.971 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.971 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.971 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.971 03:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.971 03:17:32 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.971 03:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.876 03:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:28.876 00:14:28.876 real 0m7.659s 00:14:28.876 user 0m5.248s 00:14:28.876 sys 0m3.396s 00:14:28.876 03:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.876 03:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:28.876 ************************************ 00:14:28.876 END TEST nvmf_fused_ordering 00:14:28.876 ************************************ 00:14:29.135 03:17:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:29.135 03:17:35 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:29.135 03:17:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:29.135 03:17:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.135 03:17:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.135 ************************************ 00:14:29.135 START TEST nvmf_delete_subsystem 00:14:29.135 ************************************ 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:29.135 * Looking for test storage... 00:14:29.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.135 03:17:35 
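The real/user/sys block and the END TEST / START TEST banners above are emitted by run_test from autotest_common.sh, which times each test script and frames its output (the @1099 argument check and the @1105/@1124 xtrace_disable lines in the trace belong to it). A rough behavioral stand-in, not the actual implementation:

    run_test() {    # sketch: banner, time the script, banner
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # source of the real/user/sys lines above
        local rc=$?               # exit status of the test script itself
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test nvmf_delete_subsystem \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp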
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[these three /opt entries repeated by the nested sourcing]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[the same entries re-prepended] 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[the same entries re-prepended] 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo [the exported PATH value] 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.135 03:17:35
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:29.135 03:17:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.038 03:17:37 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:31.038 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:31.038 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.038 03:17:37 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.038 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:31.039 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:31.039 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.039 03:17:37 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.039 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.297 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.297 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.297 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.297 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.297 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.297 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.297 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:14:31.297 00:14:31.297 --- 10.0.0.2 ping statistics --- 00:14:31.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.297 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:14:31.297 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:14:31.297 00:14:31.297 --- 10.0.0.1 ping statistics --- 00:14:31.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.298 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3145535 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3145535 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3145535 ']' 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.298 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:31.298 [2024-07-15 03:17:37.359378] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:31.298 [2024-07-15 03:17:37.359461] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.298 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.298 [2024-07-15 03:17:37.425110] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:31.556 [2024-07-15 03:17:37.515933] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:31.556 [2024-07-15 03:17:37.515989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.556 [2024-07-15 03:17:37.516003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.556 [2024-07-15 03:17:37.516014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.556 [2024-07-15 03:17:37.516024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.556 [2024-07-15 03:17:37.516085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.556 [2024-07-15 03:17:37.516089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.556 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.556 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:14:31.556 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.556 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.556 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:31.556 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.556 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.556 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.556 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:31.556 [2024-07-15 03:17:37.661423] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:31.557 [2024-07-15 03:17:37.677626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:31.557 NULL1 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:31.557 Delay0 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.557 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:31.814 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.815 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3145560 00:14:31.815 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:31.815 03:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:31.815 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.815 [2024-07-15 03:17:37.762318] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
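(Recap, sketched from the rpc_cmd and perf invocations traced above: delete_subsystem.sh arms a deliberately slow namespace, queues I/O against it, and then deletes the subsystem mid-flight. The bash below is a minimal reconstruction with flags copied verbatim from this trace; the harness wrappers (rpc_cmd, NOT, xtrace) and the netns plumbing are omitted, and the comments are interpretation, not script output.)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport; '-o -u 8192' as logged above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                  # null backing bdev: 1000 MB, 512 B blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                        # delay args in microseconds: ~1 s per op,
                                                        # so submitted I/O stays queued
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # Delay0 becomes NSID 1

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &         # QD 128 randrw on cores 2-3
  perf_pid=$!
  sleep 2
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 # delete while I/O is in flight

The error completions summarized below are the expected outcome of that last step: every queued I/O is failed back to the host once the subsystem disappears.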
00:14:33.711 03:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.711 03:17:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.711 03:17:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.969
[long run of 'Read/Write completed with error (sct=0, sc=8)' completions with interleaved 'starting I/O failed: -6' markers elided]
[2024-07-15 03:17:39.882833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa9d40 is same with the state(5) to be set
[2024-07-15 03:17:39.883637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f93a8000c00 is same with the state(5) to be set
[2024-07-15 03:17:40.857253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc1630 is same with the state(5) to be set
[2024-07-15 03:17:40.885295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa4ec0 is same with the state(5) to be set
[2024-07-15 03:17:40.885597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f93a800d600 is same with the state(5) to be set
[2024-07-15 03:17:40.885840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f93a800cfe0 is same with the state(5) to be set
[2024-07-15 03:17:40.886060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa4b00 is same with the state(5) to be set
00:14:34.904 03:17:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.904 03:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:34.904 Initializing NVMe Controllers 00:14:34.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:34.904 Controller IO queue size 128, less than required. 00:14:34.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:34.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:34.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:34.904 Initialization complete. Launching workers.
00:14:34.904 ======================================================== 00:14:34.904 Latency(us) 00:14:34.904 Device Information : IOPS MiB/s Average min max 00:14:34.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.33 0.08 894981.70 657.53 2002956.88 00:14:34.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.90 0.08 993264.81 383.90 2003148.26 00:14:34.904 ======================================================== 00:14:34.904 Total : 333.23 0.16 942731.76 383.90 2003148.26 00:14:34.904 00:14:34.904 03:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3145560 00:14:34.905 03:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:34.905 [2024-07-15 03:17:40.887330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc1630 (9): Bad file descriptor 00:14:34.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3145560 00:14:35.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3145560) - No such process 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3145560 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3145560 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3145560 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:35.469 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
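(The 'No such process' / NOT wait exchange above is the harness proving that spdk_nvme_perf died once its subsystem was deleted: kill -0 merely probes whether the pid still exists, and wait is then expected to report a failure status. A minimal sketch of that idiom in plain bash, assuming the $perf_pid captured at launch; autotest_common.sh's NOT/valid_exec_arg wrappers are paraphrased, not reproduced. The same loop guards the second perf run being set up here.)

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do    # signal 0: existence check only, nothing delivered
      (( delay++ > 30 )) && exit 1             # fail the test if perf outlives the deletion
      sleep 0.5
  done
  # perf is gone; reaping it must yield a nonzero status, since it aborted
  # with I/O errors when its controller disappeared.
  if wait "$perf_pid" 2>/dev/null; then
      exit 1                                   # a clean perf exit would mean the delete never bit
  fi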
00:14:35.470 [2024-07-15 03:17:41.404061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3146011 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3146011 00:14:35.470 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:35.470 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.470 [2024-07-15 03:17:41.464934] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:36.034 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:36.034 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3146011 00:14:36.034 03:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:36.292 03:17:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:36.292 03:17:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3146011 00:14:36.292 03:17:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:36.856 03:17:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:36.856 03:17:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3146011 00:14:36.856 03:17:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:37.425 03:17:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:37.425 03:17:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3146011 00:14:37.425 03:17:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:37.987 03:17:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:37.988 03:17:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3146011 00:14:37.988 03:17:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:38.551 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:38.551 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # 
kill -0 3146011 00:14:38.551 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:38.551 Initializing NVMe Controllers 00:14:38.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:38.551 Controller IO queue size 128, less than required. 00:14:38.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:38.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:38.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:38.551 Initialization complete. Launching workers. 00:14:38.551 ======================================================== 00:14:38.551 Latency(us) 00:14:38.551 Device Information : IOPS MiB/s Average min max 00:14:38.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004434.20 1000200.92 1010973.13 00:14:38.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004492.42 1000211.00 1012381.04 00:14:38.551 ======================================================== 00:14:38.551 Total : 256.00 0.12 1004463.31 1000200.92 1012381.04 00:14:38.551 00:14:38.808 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:38.808 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3146011 00:14:38.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3146011) - No such process 00:14:38.808 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3146011 00:14:38.808 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:38.808 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:38.808 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:38.808 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:38.808 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:38.808 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:38.808 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:38.808 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:38.808 rmmod nvme_tcp 00:14:39.065 rmmod nvme_fabrics 00:14:39.065 rmmod nvme_keyring 00:14:39.065 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.065 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:39.065 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:39.065 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3145535 ']' 00:14:39.065 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3145535 00:14:39.065 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3145535 ']' 00:14:39.065 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3145535 00:14:39.065 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:14:39.065 03:17:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:39.065 03:17:44 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3145535 00:14:39.065 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:39.065 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:39.065 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3145535' 00:14:39.065 killing process with pid 3145535 00:14:39.065 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3145535 00:14:39.065 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 3145535 00:14:39.325 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.325 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.325 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.325 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.325 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.325 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.325 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.325 03:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.227 03:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:41.227 00:14:41.227 real 0m12.240s 00:14:41.227 user 0m27.697s 00:14:41.227 sys 0m2.943s 00:14:41.227 03:17:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.227 03:17:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:41.227 ************************************ 00:14:41.227 END TEST nvmf_delete_subsystem 00:14:41.227 ************************************ 00:14:41.227 03:17:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:41.227 03:17:47 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:41.227 03:17:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:41.227 03:17:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.227 03:17:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:41.227 ************************************ 00:14:41.227 START TEST nvmf_ns_masking 00:14:41.227 ************************************ 00:14:41.227 03:17:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:41.485 * Looking for test storage... 
00:14:41.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2-6 -- # [repetitive PATH prepend/export/echo records elided] 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=589b856c-2d6a-4783-a57f-05a072c337fc 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c59aac97-19eb-4239-8383-fd5ab49557c9 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:41.485 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=38f91a07-99b8-4ad7-88c2-727663be2870 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:41.486 03:17:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.384 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:43.385 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:43.385 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.385 
03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:43.385 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:43.385 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:14:43.385 00:14:43.385 --- 10.0.0.2 ping statistics --- 00:14:43.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.385 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:14:43.385 00:14:43.385 --- 10.0.0.1 ping statistics --- 00:14:43.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.385 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3148430 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3148430 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3148430 ']' 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.385 03:17:49 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.385 03:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:43.385 [2024-07-15 03:17:49.505285] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:43.385 [2024-07-15 03:17:49.505359] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.643 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.644 [2024-07-15 03:17:49.568885] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.644 [2024-07-15 03:17:49.653280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.644 [2024-07-15 03:17:49.653343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.644 [2024-07-15 03:17:49.653370] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.644 [2024-07-15 03:17:49.653381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.644 [2024-07-15 03:17:49.653390] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.644 [2024-07-15 03:17:49.653425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.644 03:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.644 03:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:43.644 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.644 03:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.644 03:17:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:43.901 03:17:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.901 03:17:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:44.158 [2024-07-15 03:17:50.074534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.158 03:17:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:44.158 03:17:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:44.158 03:17:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:44.414 Malloc1 00:14:44.414 03:17:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:44.671 Malloc2 00:14:44.671 03:17:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
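Every piece of target state in this test is created over JSON-RPC against /var/tmp/spdk.sock. Condensed from the trace above (a sketch; rpc.py shortened from the full workspace path):

    rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8192-byte I/O unit
    rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB ramdisk, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                     # -a allow any host, -s serial number

The namespaces and the TCP listener are attached in the next few calls, after which the kernel host can connect.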
00:14:44.928 03:17:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:45.185 03:17:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.443 [2024-07-15 03:17:51.540356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.443 03:17:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:45.443 03:17:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 38f91a07-99b8-4ad7-88c2-727663be2870 -a 10.0.0.2 -s 4420 -i 4 00:14:45.701 03:17:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:45.701 03:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:45.701 03:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.701 03:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:45.701 03:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.230 [ 0]:0x1 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f69795dcefbe451e8d8c0f42dc8a60f8 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f69795dcefbe451e8d8c0f42dc8a60f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.230 03:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
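The [ 0]:0x1 probes above come from the script's ns_is_visible helper, which leans on two behaviors: nvme list-ns only enumerates namespaces the controller exposes to this host, and a masked namespace identifies with an all-zero NGUID. A sketch of the check as traced (assumes nvme-cli and jq, and the controller name resolved by the connect step):

    ns_is_visible() {
        local nsid=$1                                  # e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$nsid"         # prints "[ 0]:0x1" when listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # an all-zero NGUID means the namespace is hidden from this host
        [[ $nguid != "00000000000000000000000000000000" ]]
    }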
00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.230 [ 0]:0x1 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f69795dcefbe451e8d8c0f42dc8a60f8 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f69795dcefbe451e8d8c0f42dc8a60f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:48.230 [ 1]:0x2 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c240dc6177749fdaa63430ecaab83d2 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c240dc6177749fdaa63430ecaab83d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.230 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.488 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:48.745 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:48.745 03:17:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 38f91a07-99b8-4ad7-88c2-727663be2870 -a 10.0.0.2 -s 4420 -i 4 00:14:49.003 03:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:49.003 03:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:49.003 03:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.003 03:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:49.003 03:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:49.003 03:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:50.901 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:50.901 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:50.901 03:17:57 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:51.158 [ 0]:0x2 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c240dc6177749fdaa63430ecaab83d2 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
7c240dc6177749fdaa63430ecaab83d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.158 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:51.417 [ 0]:0x1 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f69795dcefbe451e8d8c0f42dc8a60f8 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f69795dcefbe451e8d8c0f42dc8a60f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:51.417 [ 1]:0x2 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:51.417 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.674 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c240dc6177749fdaa63430ecaab83d2 00:14:51.674 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c240dc6177749fdaa63430ecaab83d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.674 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:51.932 [ 0]:0x2 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c240dc6177749fdaa63430ecaab83d2 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c240dc6177749fdaa63430ecaab83d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:51.932 03:17:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.932 03:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:52.190 03:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:52.190 03:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 38f91a07-99b8-4ad7-88c2-727663be2870 -a 10.0.0.2 -s 4420 -i 4 00:14:52.448 03:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:52.448 03:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:52.448 03:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.448 03:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:52.448 03:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:52.448 03:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
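Everything from here on exercises the per-host masking RPCs. A namespace attached with --no-auto-visible starts hidden from every host; nvmf_ns_add_host and nvmf_ns_remove_host then toggle visibility for one host NQN at a time, and the trace simply re-runs ns_is_visible over the live connection after each call rather than reconnecting. Condensed (sketch):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant, then revoke, visibility of nsid 1 for host1 only
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1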
00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:54.345 [ 0]:0x1 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:54.345 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:54.603 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f69795dcefbe451e8d8c0f42dc8a60f8 00:14:54.603 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f69795dcefbe451e8d8c0f42dc8a60f8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:54.603 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:54.603 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:54.603 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:54.603 [ 1]:0x2 00:14:54.603 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:54.603 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:54.603 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c240dc6177749fdaa63430ecaab83d2 00:14:54.603 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c240dc6177749fdaa63430ecaab83d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:54.603 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:54.860 [ 0]:0x2 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c240dc6177749fdaa63430ecaab83d2 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c240dc6177749fdaa63430ecaab83d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:54.860 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:54.861 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:54.861 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.861 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.861 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.861 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.861 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.861 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.861 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.861 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:54.861 03:18:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:55.118 [2024-07-15 03:18:01.178358] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:55.118 request: 00:14:55.118 { 00:14:55.118 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.118 "nsid": 2, 00:14:55.118 "host": "nqn.2016-06.io.spdk:host1", 00:14:55.118 "method": "nvmf_ns_remove_host", 00:14:55.118 "req_id": 1 00:14:55.118 } 00:14:55.118 Got JSON-RPC error response 00:14:55.118 response: 00:14:55.118 { 00:14:55.118 "code": -32602, 00:14:55.118 "message": "Invalid parameters" 00:14:55.118 } 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:55.118 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:55.119 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:55.119 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.119 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:55.119 [ 0]:0x2 00:14:55.119 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.119 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c240dc6177749fdaa63430ecaab83d2 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
7c240dc6177749fdaa63430ecaab83d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3150012 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3150012 /var/tmp/host.sock 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3150012 ']' 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:55.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.377 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:55.377 [2024-07-15 03:18:01.395566] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
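With the kernel initiator disconnected, the rest of the test drives an SPDK host instead: a second app is started on its own RPC socket and attached to the target, so the resulting bdevs (nvme0n1, nvme1n2) can be inspected with bdev_get_bdevs. Roughly (a sketch; binary and script paths shortened from the workspace layout):

    # host-side SPDK app on a private RPC socket, core mask 0x2
    build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    # attach to the target subsystem as host1; surfaces visible namespaces as bdevs
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0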
00:14:55.377 [2024-07-15 03:18:01.395649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150012 ] 00:14:55.377 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.377 [2024-07-15 03:18:01.457543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.635 [2024-07-15 03:18:01.551200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.924 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.924 03:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:55.924 03:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.181 03:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.181 03:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 589b856c-2d6a-4783-a57f-05a072c337fc 00:14:56.181 03:18:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:56.181 03:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 589B856C2D6A4783A57F05A072C337FC -i 00:14:56.437 03:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c59aac97-19eb-4239-8383-fd5ab49557c9 00:14:56.437 03:18:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:56.437 03:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C59AAC9719EB42398383FD5AB49557C9 -i 00:14:56.694 03:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:56.951 03:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:57.208 03:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:57.208 03:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:57.772 nvme0n1 00:14:57.772 03:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:57.772 03:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:14:58.029 nvme1n2 00:14:58.029 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:58.029 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:58.029 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:58.029 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:58.029 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:58.286 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:58.286 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:58.286 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:58.286 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:58.543 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 589b856c-2d6a-4783-a57f-05a072c337fc == \5\8\9\b\8\5\6\c\-\2\d\6\a\-\4\7\8\3\-\a\5\7\f\-\0\5\a\0\7\2\c\3\3\7\f\c ]] 00:14:58.543 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:58.543 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:58.543 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c59aac97-19eb-4239-8383-fd5ab49557c9 == \c\5\9\a\a\c\9\7\-\1\9\e\b\-\4\2\3\9\-\8\3\8\3\-\f\d\5\a\b\4\9\5\5\7\c\9 ]] 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3150012 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3150012 ']' 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3150012 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3150012 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3150012' 00:14:58.801 killing process with pid 3150012 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3150012 00:14:58.801 03:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3150012 00:14:59.367 03:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:59.625 03:18:05 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:59.625 rmmod nvme_tcp
00:14:59.625 rmmod nvme_fabrics
00:14:59.625 rmmod nvme_keyring
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3148430 ']'
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3148430
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3148430 ']'
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3148430
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3148430
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3148430'
00:14:59.625 killing process with pid 3148430
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3148430
00:14:59.625 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3148430
00:14:59.884 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:59.884 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:59.884 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:59.884 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:59.884 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:59.884 03:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:59.884 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:59.884 03:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:02.418 03:18:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:02.418
00:15:02.418 real 0m20.645s
00:15:02.418 user 0m26.908s
00:15:02.418 sys 0m4.017s
00:15:02.418 03:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable
00:15:02.418 03:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:15:02.418 ************************************
00:15:02.418 END TEST nvmf_ns_masking
00:15:02.418 ************************************
00:15:02.418 03:18:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
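The masking sequence that just finished is compact enough to restate: the NGUID handed to nvmf_subsystem_add_ns is simply the namespace UUID with its dashes stripped (and, judging by the resulting 589B856C... value, uppercased; only the tr -d - step is visible in this trace), and nvmf_ns_add_host is what makes namespace N visible to one host NQN. A minimal bash sketch of that flow, with rpc.py standing in for the full scripts/rpc.py path used above:

  uuid2nguid() {
    # 589b856c-2d6a-4783-a57f-05a072c337fc -> 589B856C2D6A4783A57F05A072C337FC
    local u=${1//-/}        # drop the dashes, as the 'tr -d -' above does
    printf '%s\n' "${u^^}"  # uppercase (assumed; not shown in this excerpt)
  }

  nguid=$(uuid2nguid 589b856c-2d6a-4783-a57f-05a072c337fc)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

With that in place, host1 sees only namespace 1 and host2 only namespace 2, which is exactly what the bdev_get_bdevs UUID comparisons earlier verified.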
00:15:02.418 03:18:08 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]]
00:15:02.418 03:18:08 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:15:02.419 03:18:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:15:02.419 03:18:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:15:02.419 03:18:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:15:02.419 ************************************
00:15:02.419 START TEST nvmf_nvme_cli
00:15:02.419 ************************************
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:15:02.419 * Looking for test storage...
00:15:02.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:02.419 03:18:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:04.320 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:04.321 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:04.321 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:04.321 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}")
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]]
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:15:04.321 Found net devices under 0000:0a:00.1: cvl_0_1
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
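At this point the two ports of the ice NIC have been split into a tiny two-node topology: cvl_0_0 is moved into a private network namespace to act as the target side, while cvl_0_1 stays in the root namespace as the initiator. Condensed from the trace above into plain commands (interface names as captured; this is the gist of nvmf_tcp_init, not the whole helper):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

The pings that follow are the sanity check that both directions of this link work before the target application is started.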
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:15:04.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:04.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms
00:15:04.321
00:15:04.321 --- 10.0.0.2 ping statistics ---
00:15:04.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:04.321 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:04.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:04.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms
00:15:04.321
00:15:04.321 --- 10.0.0.1 ping statistics ---
00:15:04.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:04.321 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3153029
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3153029
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3153029 ']'
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:04.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable
00:15:04.321 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:04.321 [2024-07-15 03:18:10.228822] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:15:04.321 [2024-07-15 03:18:10.228924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.321 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.321 [2024-07-15 03:18:10.294353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.321 [2024-07-15 03:18:10.385184] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.321 [2024-07-15 03:18:10.385256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.321 [2024-07-15 03:18:10.385285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.321 [2024-07-15 03:18:10.385297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.321 [2024-07-15 03:18:10.385307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.321 [2024-07-15 03:18:10.385356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.321 [2024-07-15 03:18:10.385414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.321 [2024-07-15 03:18:10.385479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.321 [2024-07-15 03:18:10.385481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.580 [2024-07-15 03:18:10.534515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.580 Malloc0 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.580 Malloc1 00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.580 03:18:10 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:04.580 [2024-07-15 03:18:10.620641] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:04.580 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:15:04.837
00:15:04.837 Discovery Log Number of Records 2, Generation counter 2
00:15:04.837 =====Discovery Log Entry 0======
00:15:04.837 trtype: tcp
00:15:04.837 adrfam: ipv4
00:15:04.837 subtype: current discovery subsystem
00:15:04.837 treq: not required
00:15:04.837 portid: 0
00:15:04.837 trsvcid: 4420
00:15:04.837 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:15:04.837 traddr: 10.0.0.2
00:15:04.837 eflags: explicit discovery connections, duplicate discovery information
00:15:04.837 sectype: none
00:15:04.837 =====Discovery Log Entry 1======
00:15:04.837 trtype: tcp
00:15:04.837 adrfam: ipv4
00:15:04.837 subtype: nvme subsystem
00:15:04.837 treq: not required
00:15:04.837 portid: 0
00:15:04.837 trsvcid: 4420
00:15:04.837 subnqn: nqn.2016-06.io.spdk:cnode1
00:15:04.837 traddr: 10.0.0.2
00:15:04.837 eflags: none
00:15:04.837 sectype: none
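The discovery log above is exactly what an initiator acts on: entry 0 is the discovery subsystem itself and entry 1 is the I/O subsystem nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. The same two nvme-cli steps the test performs next can be reproduced by hand (host NQN/ID values are the ones generated for this particular run):

  nvme discover -t tcp -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2 namespaces

The device-enumeration loop below does the equivalent of that last check by parsing nvme list output.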
00:15:04.837 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:15:04.837 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:15:04.837 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:15:04.837 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:04.837 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:15:04.837 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:15:04.837 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:04.837 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:15:04.837 03:18:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:04.837 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:15:04.837 03:18:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:15:05.402 03:18:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:15:05.402 03:18:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0
00:15:05.402 03:18:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:15:05.402 03:18:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:15:05.402 03:18:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:15:05.402 03:18:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1
00:15:07.303 03:18:13
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:07.303 /dev/nvme0n1 ]] 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:07.303 03:18:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:07.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.559 03:18:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:07.559 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:07.559 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:07.559 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.559 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:07.559 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.559 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:07.559 03:18:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.560 rmmod nvme_tcp 00:15:07.560 rmmod nvme_fabrics 00:15:07.560 rmmod nvme_keyring 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3153029 ']' 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3153029 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3153029 ']' 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3153029 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3153029 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3153029' 00:15:07.560 killing process with pid 3153029 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3153029 00:15:07.560 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3153029 00:15:07.818 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:07.818 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:07.818 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:07.818 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.818 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.818 03:18:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.818 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.818 03:18:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.351 03:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.351 00:15:10.351 real 0m7.874s 00:15:10.351 user 0m14.307s 00:15:10.351 sys 0m2.110s 00:15:10.351 03:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.351 03:18:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:10.351 ************************************ 00:15:10.351 END TEST nvmf_nvme_cli 00:15:10.351 ************************************ 00:15:10.351 03:18:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:10.351 03:18:15 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:10.351 03:18:15 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:10.351 03:18:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:10.351 03:18:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.351 03:18:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:10.351 ************************************ 00:15:10.351 START TEST nvmf_vfio_user 00:15:10.351 ************************************ 00:15:10.351 03:18:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:10.351 * Looking for test storage... 00:15:10.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.351 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.351 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:10.351 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:10.352 
03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3153832 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3153832' 00:15:10.352 Process pid: 3153832 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3153832 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3153832 ']' 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:10.352 [2024-07-15 03:18:16.072147] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:10.352 [2024-07-15 03:18:16.072259] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.352 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.352 [2024-07-15 03:18:16.133666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.352 [2024-07-15 03:18:16.220576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.352 [2024-07-15 03:18:16.220633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.352 [2024-07-15 03:18:16.220661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.352 [2024-07-15 03:18:16.220679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.352 [2024-07-15 03:18:16.220689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
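Note the start-up pattern here: nvmf_tgt is launched with -m '[0,1,2,3]' (four cores, and no network namespace this time, since vfio-user does not touch a NIC), and waitforlisten then polls the RPC socket until the application answers. A simplified sketch of that handshake, under the assumption that polling rpc_get_methods is an adequate stand-in for the real helper's retry logic (max_retries=100 as in the trace; rpc.py abbreviates the full scripts/rpc.py path):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  for _ in $(seq 1 100); do
    # rpc_get_methods succeeds once the app listens on /var/tmp/spdk.sock
    rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
  done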
00:15:10.352 [2024-07-15 03:18:16.220849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:15:10.352 [2024-07-15 03:18:16.220909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:15:10.352 [2024-07-15 03:18:16.220935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:15:10.352 [2024-07-15 03:18:16.220937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0
00:15:10.352 03:18:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:15:11.285 03:18:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:15:11.543 03:18:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:15:11.543 03:18:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:15:11.543 03:18:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:15:11.543 03:18:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:15:11.543 03:18:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:15:11.801 Malloc1
00:15:11.801 03:18:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:15:12.059 03:18:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:15:12.317 03:18:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:15:12.575 03:18:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:15:12.575 03:18:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:15:12.575 03:18:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:15:12.833 Malloc2
00:15:12.833 03:18:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:15:13.091 03:18:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:15:13.349 03:18:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:15:13.608 03:18:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:15:13.608 03:18:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
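Everything a vfio-user controller needs on the target side is now in place, and the recipe is worth isolating: unlike TCP, the listener "address" is a filesystem directory rather than an IP:port pair, and the target creates a cntrl socket inside it for the client to map. Reduced to its RPC calls (rpc.py again standing in for the full scripts/rpc.py path):

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  # -a names the directory to export into; -s 0 as captured in the trace
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The spdk_nvme_identify run that follows connects through that directory with trtype:VFIOUSER, which is why its debug output shows PCI-style BAR mappings rather than a network connection.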
00:15:13.608 03:18:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:15:13.608 03:18:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:15:13.608 03:18:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:15:13.608 03:18:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:15:13.609 [2024-07-15 03:18:19.647859] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:15:13.609 [2024-07-15 03:18:19.647943] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154257 ]
00:15:13.609 EAL: No free 2048 kB hugepages reported on node 1
00:15:13.609 [2024-07-15 03:18:19.682218] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:15:13.609 [2024-07-15 03:18:19.691366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:15:13.609 [2024-07-15 03:18:19.691398] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f305f1a9000
00:15:13.609 [2024-07-15 03:18:19.692358] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:13.609 [2024-07-15 03:18:19.693356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:13.609 [2024-07-15 03:18:19.694361] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:13.609 [2024-07-15 03:18:19.695366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:15:13.609 [2024-07-15 03:18:19.696370] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:15:13.609 [2024-07-15 03:18:19.697372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:13.609 [2024-07-15 03:18:19.698378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:15:13.609 [2024-07-15 03:18:19.699383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:13.609 [2024-07-15 03:18:19.700389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:15:13.609 [2024-07-15 03:18:19.700409] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f305df5d000
00:15:13.609 [2024-07-15 03:18:19.701526] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:13.609 [2024-07-15 03:18:19.716534] vfio_user_pci.c:
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:13.609 [2024-07-15 03:18:19.716572] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:13.609 [2024-07-15 03:18:19.721515] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:13.609 [2024-07-15 03:18:19.721567] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:13.609 [2024-07-15 03:18:19.721666] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:13.609 [2024-07-15 03:18:19.721701] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:13.609 [2024-07-15 03:18:19.721712] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:13.609 [2024-07-15 03:18:19.722514] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:13.609 [2024-07-15 03:18:19.722536] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:13.609 [2024-07-15 03:18:19.722556] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:13.609 [2024-07-15 03:18:19.723515] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:13.609 [2024-07-15 03:18:19.723533] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:13.609 [2024-07-15 03:18:19.723546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:13.609 [2024-07-15 03:18:19.724521] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:13.609 [2024-07-15 03:18:19.724539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:13.609 [2024-07-15 03:18:19.725527] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:13.609 [2024-07-15 03:18:19.725546] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:13.609 [2024-07-15 03:18:19.725555] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:13.609 [2024-07-15 03:18:19.725567] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:13.609 [2024-07-15 03:18:19.725676] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:13.609 [2024-07-15 03:18:19.725683] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:13.609 [2024-07-15 03:18:19.725692] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:13.609 [2024-07-15 03:18:19.726534] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:13.609 [2024-07-15 03:18:19.727538] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:13.609 [2024-07-15 03:18:19.728548] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:13.609 [2024-07-15 03:18:19.729539] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.609 [2024-07-15 03:18:19.729650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:13.609 [2024-07-15 03:18:19.730561] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:13.609 [2024-07-15 03:18:19.730579] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:13.609 [2024-07-15 03:18:19.730588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:13.609 [2024-07-15 03:18:19.730612] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:13.609 [2024-07-15 03:18:19.730625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:13.609 [2024-07-15 03:18:19.730654] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:13.609 [2024-07-15 03:18:19.730664] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.609 [2024-07-15 03:18:19.730691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.609 [2024-07-15 03:18:19.730739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:13.609 [2024-07-15 03:18:19.730758] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:13.609 [2024-07-15 03:18:19.730771] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:13.609 [2024-07-15 03:18:19.730779] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:13.609 [2024-07-15 03:18:19.730786] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:13.609 [2024-07-15 03:18:19.730794] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:13.609 [2024-07-15 03:18:19.730802] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:13.609 [2024-07-15 03:18:19.730809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:13.609 [2024-07-15 03:18:19.730822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:13.609 [2024-07-15 03:18:19.730838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:13.609 [2024-07-15 03:18:19.730856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:13.609 [2024-07-15 03:18:19.730902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.609 [2024-07-15 03:18:19.730917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.609 [2024-07-15 03:18:19.730929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.609 [2024-07-15 03:18:19.730941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.609 [2024-07-15 03:18:19.730949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:13.609 [2024-07-15 03:18:19.730965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:13.609 [2024-07-15 03:18:19.730980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:13.609 [2024-07-15 03:18:19.730993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:13.609 [2024-07-15 03:18:19.731004] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:13.609 [2024-07-15 03:18:19.731013] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:13.609 [2024-07-15 03:18:19.731024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:13.609 [2024-07-15 03:18:19.731035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:13.610 [2024-07-15 03:18:19.731063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:13.610 [2024-07-15 03:18:19.731130] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731158] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:13.610 [2024-07-15 03:18:19.731166] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:13.610 [2024-07-15 03:18:19.731191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:13.610 [2024-07-15 03:18:19.731208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:13.610 [2024-07-15 03:18:19.731227] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:13.610 [2024-07-15 03:18:19.731259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731287] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:13.610 [2024-07-15 03:18:19.731295] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.610 [2024-07-15 03:18:19.731304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.610 [2024-07-15 03:18:19.731328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:13.610 [2024-07-15 03:18:19.731351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731377] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:13.610 [2024-07-15 03:18:19.731385] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.610 [2024-07-15 03:18:19.731394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.610 [2024-07-15 03:18:19.731407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:13.610 [2024-07-15 03:18:19.731422] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
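An aside for readers following this register trace: the values reported by nvme_vfio_ctrlr_get_reg_4/get_reg_8 decode directly per the NVMe spec. As a minimal sketch (a hypothetical helper, not part of the test scripts), the VS value 0x10300 read at offset 0x8 above is the version register, with the major version in bits 31:16, minor in bits 15:08, and tertiary in bits 07:00:

  vs=0x10300   # value logged by nvme_vfio_ctrlr_get_reg_4 for offset 0x8 (VS)
  printf 'NVMe version: %d.%d.%d\n' $(( vs >> 16 )) $(( (vs >> 8) & 0xff )) $(( vs & 0xff ))
  # prints "NVMe version: 1.3.0", consistent with the "NVMe Specification
  # Version (VS): 1.3" line in the identify report below.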
00:15:13.610 [2024-07-15 03:18:19.731446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731457] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731486] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:13.610 [2024-07-15 03:18:19.731493] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:13.610 [2024-07-15 03:18:19.731501] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:13.610 [2024-07-15 03:18:19.731531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:13.610 [2024-07-15 03:18:19.731549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:13.610 [2024-07-15 03:18:19.731568] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:13.610 [2024-07-15 03:18:19.731580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:13.610 [2024-07-15 03:18:19.731595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:13.610 [2024-07-15 03:18:19.731609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:13.610 [2024-07-15 03:18:19.731625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:13.610 [2024-07-15 03:18:19.731636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:13.610 [2024-07-15 03:18:19.731659] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:13.610 [2024-07-15 03:18:19.731668] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:13.610 [2024-07-15 03:18:19.731674] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:13.610 [2024-07-15 03:18:19.731680] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:13.610 [2024-07-15 03:18:19.731689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:13.610 [2024-07-15 03:18:19.731700] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:13.610 
[2024-07-15 03:18:19.731707] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:13.610 [2024-07-15 03:18:19.731716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:13.610 [2024-07-15 03:18:19.731726] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:13.610 [2024-07-15 03:18:19.731734] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.610 [2024-07-15 03:18:19.731742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.610 [2024-07-15 03:18:19.731754] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:13.610 [2024-07-15 03:18:19.731761] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:13.610 [2024-07-15 03:18:19.731770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:13.610 [2024-07-15 03:18:19.731781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:13.610 [2024-07-15 03:18:19.731800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:13.610 [2024-07-15 03:18:19.731820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:13.610 [2024-07-15 03:18:19.731832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:13.610 ===================================================== 00:15:13.610 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:13.610 ===================================================== 00:15:13.610 Controller Capabilities/Features 00:15:13.610 ================================ 00:15:13.610 Vendor ID: 4e58 00:15:13.610 Subsystem Vendor ID: 4e58 00:15:13.610 Serial Number: SPDK1 00:15:13.610 Model Number: SPDK bdev Controller 00:15:13.610 Firmware Version: 24.09 00:15:13.610 Recommended Arb Burst: 6 00:15:13.610 IEEE OUI Identifier: 8d 6b 50 00:15:13.610 Multi-path I/O 00:15:13.610 May have multiple subsystem ports: Yes 00:15:13.610 May have multiple controllers: Yes 00:15:13.610 Associated with SR-IOV VF: No 00:15:13.610 Max Data Transfer Size: 131072 00:15:13.610 Max Number of Namespaces: 32 00:15:13.610 Max Number of I/O Queues: 127 00:15:13.610 NVMe Specification Version (VS): 1.3 00:15:13.610 NVMe Specification Version (Identify): 1.3 00:15:13.610 Maximum Queue Entries: 256 00:15:13.610 Contiguous Queues Required: Yes 00:15:13.610 Arbitration Mechanisms Supported 00:15:13.610 Weighted Round Robin: Not Supported 00:15:13.610 Vendor Specific: Not Supported 00:15:13.610 Reset Timeout: 15000 ms 00:15:13.610 Doorbell Stride: 4 bytes 00:15:13.610 NVM Subsystem Reset: Not Supported 00:15:13.610 Command Sets Supported 00:15:13.610 NVM Command Set: Supported 00:15:13.610 Boot Partition: Not Supported 00:15:13.610 Memory Page Size Minimum: 4096 bytes 00:15:13.610 Memory Page Size Maximum: 4096 bytes 00:15:13.610 Persistent Memory Region: Not Supported 
00:15:13.610 Optional Asynchronous Events Supported 00:15:13.610 Namespace Attribute Notices: Supported 00:15:13.610 Firmware Activation Notices: Not Supported 00:15:13.610 ANA Change Notices: Not Supported 00:15:13.610 PLE Aggregate Log Change Notices: Not Supported 00:15:13.610 LBA Status Info Alert Notices: Not Supported 00:15:13.610 EGE Aggregate Log Change Notices: Not Supported 00:15:13.610 Normal NVM Subsystem Shutdown event: Not Supported 00:15:13.610 Zone Descriptor Change Notices: Not Supported 00:15:13.610 Discovery Log Change Notices: Not Supported 00:15:13.610 Controller Attributes 00:15:13.610 128-bit Host Identifier: Supported 00:15:13.610 Non-Operational Permissive Mode: Not Supported 00:15:13.610 NVM Sets: Not Supported 00:15:13.610 Read Recovery Levels: Not Supported 00:15:13.610 Endurance Groups: Not Supported 00:15:13.610 Predictable Latency Mode: Not Supported 00:15:13.610 Traffic Based Keep ALive: Not Supported 00:15:13.611 Namespace Granularity: Not Supported 00:15:13.611 SQ Associations: Not Supported 00:15:13.611 UUID List: Not Supported 00:15:13.611 Multi-Domain Subsystem: Not Supported 00:15:13.611 Fixed Capacity Management: Not Supported 00:15:13.611 Variable Capacity Management: Not Supported 00:15:13.611 Delete Endurance Group: Not Supported 00:15:13.611 Delete NVM Set: Not Supported 00:15:13.611 Extended LBA Formats Supported: Not Supported 00:15:13.611 Flexible Data Placement Supported: Not Supported 00:15:13.611 00:15:13.611 Controller Memory Buffer Support 00:15:13.611 ================================ 00:15:13.611 Supported: No 00:15:13.611 00:15:13.611 Persistent Memory Region Support 00:15:13.611 ================================ 00:15:13.611 Supported: No 00:15:13.611 00:15:13.611 Admin Command Set Attributes 00:15:13.611 ============================ 00:15:13.611 Security Send/Receive: Not Supported 00:15:13.611 Format NVM: Not Supported 00:15:13.611 Firmware Activate/Download: Not Supported 00:15:13.611 Namespace Management: Not Supported 00:15:13.611 Device Self-Test: Not Supported 00:15:13.611 Directives: Not Supported 00:15:13.611 NVMe-MI: Not Supported 00:15:13.611 Virtualization Management: Not Supported 00:15:13.611 Doorbell Buffer Config: Not Supported 00:15:13.611 Get LBA Status Capability: Not Supported 00:15:13.611 Command & Feature Lockdown Capability: Not Supported 00:15:13.611 Abort Command Limit: 4 00:15:13.611 Async Event Request Limit: 4 00:15:13.611 Number of Firmware Slots: N/A 00:15:13.611 Firmware Slot 1 Read-Only: N/A 00:15:13.611 Firmware Activation Without Reset: N/A 00:15:13.611 Multiple Update Detection Support: N/A 00:15:13.611 Firmware Update Granularity: No Information Provided 00:15:13.611 Per-Namespace SMART Log: No 00:15:13.611 Asymmetric Namespace Access Log Page: Not Supported 00:15:13.611 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:13.611 Command Effects Log Page: Supported 00:15:13.611 Get Log Page Extended Data: Supported 00:15:13.611 Telemetry Log Pages: Not Supported 00:15:13.611 Persistent Event Log Pages: Not Supported 00:15:13.611 Supported Log Pages Log Page: May Support 00:15:13.611 Commands Supported & Effects Log Page: Not Supported 00:15:13.611 Feature Identifiers & Effects Log Page:May Support 00:15:13.611 NVMe-MI Commands & Effects Log Page: May Support 00:15:13.611 Data Area 4 for Telemetry Log: Not Supported 00:15:13.611 Error Log Page Entries Supported: 128 00:15:13.611 Keep Alive: Supported 00:15:13.611 Keep Alive Granularity: 10000 ms 00:15:13.611 00:15:13.611 NVM Command Set Attributes 
00:15:13.611 ========================== 00:15:13.611 Submission Queue Entry Size 00:15:13.611 Max: 64 00:15:13.611 Min: 64 00:15:13.611 Completion Queue Entry Size 00:15:13.611 Max: 16 00:15:13.611 Min: 16 00:15:13.611 Number of Namespaces: 32 00:15:13.611 Compare Command: Supported 00:15:13.611 Write Uncorrectable Command: Not Supported 00:15:13.611 Dataset Management Command: Supported 00:15:13.611 Write Zeroes Command: Supported 00:15:13.611 Set Features Save Field: Not Supported 00:15:13.611 Reservations: Not Supported 00:15:13.611 Timestamp: Not Supported 00:15:13.611 Copy: Supported 00:15:13.611 Volatile Write Cache: Present 00:15:13.611 Atomic Write Unit (Normal): 1 00:15:13.611 Atomic Write Unit (PFail): 1 00:15:13.611 Atomic Compare & Write Unit: 1 00:15:13.611 Fused Compare & Write: Supported 00:15:13.611 Scatter-Gather List 00:15:13.611 SGL Command Set: Supported (Dword aligned) 00:15:13.611 SGL Keyed: Not Supported 00:15:13.611 SGL Bit Bucket Descriptor: Not Supported 00:15:13.611 SGL Metadata Pointer: Not Supported 00:15:13.611 Oversized SGL: Not Supported 00:15:13.611 SGL Metadata Address: Not Supported 00:15:13.611 SGL Offset: Not Supported 00:15:13.611 Transport SGL Data Block: Not Supported 00:15:13.611 Replay Protected Memory Block: Not Supported 00:15:13.611 00:15:13.611 Firmware Slot Information 00:15:13.611 ========================= 00:15:13.611 Active slot: 1 00:15:13.611 Slot 1 Firmware Revision: 24.09 00:15:13.611 00:15:13.611 00:15:13.611 Commands Supported and Effects 00:15:13.611 ============================== 00:15:13.611 Admin Commands 00:15:13.611 -------------- 00:15:13.611 Get Log Page (02h): Supported 00:15:13.611 Identify (06h): Supported 00:15:13.611 Abort (08h): Supported 00:15:13.611 Set Features (09h): Supported 00:15:13.611 Get Features (0Ah): Supported 00:15:13.611 Asynchronous Event Request (0Ch): Supported 00:15:13.611 Keep Alive (18h): Supported 00:15:13.611 I/O Commands 00:15:13.611 ------------ 00:15:13.611 Flush (00h): Supported LBA-Change 00:15:13.611 Write (01h): Supported LBA-Change 00:15:13.611 Read (02h): Supported 00:15:13.611 Compare (05h): Supported 00:15:13.611 Write Zeroes (08h): Supported LBA-Change 00:15:13.611 Dataset Management (09h): Supported LBA-Change 00:15:13.611 Copy (19h): Supported LBA-Change 00:15:13.611 00:15:13.611 Error Log 00:15:13.611 ========= 00:15:13.611 00:15:13.611 Arbitration 00:15:13.611 =========== 00:15:13.611 Arbitration Burst: 1 00:15:13.611 00:15:13.611 Power Management 00:15:13.611 ================ 00:15:13.611 Number of Power States: 1 00:15:13.611 Current Power State: Power State #0 00:15:13.611 Power State #0: 00:15:13.611 Max Power: 0.00 W 00:15:13.611 Non-Operational State: Operational 00:15:13.611 Entry Latency: Not Reported 00:15:13.611 Exit Latency: Not Reported 00:15:13.611 Relative Read Throughput: 0 00:15:13.611 Relative Read Latency: 0 00:15:13.611 Relative Write Throughput: 0 00:15:13.611 Relative Write Latency: 0 00:15:13.611 Idle Power: Not Reported 00:15:13.611 Active Power: Not Reported 00:15:13.611 Non-Operational Permissive Mode: Not Supported 00:15:13.611 00:15:13.611 Health Information 00:15:13.611 ================== 00:15:13.611 Critical Warnings: 00:15:13.611 Available Spare Space: OK 00:15:13.611 Temperature: OK 00:15:13.611 Device Reliability: OK 00:15:13.611 Read Only: No 00:15:13.611 Volatile Memory Backup: OK 00:15:13.611 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:13.611 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:13.611 Available Spare: 0% 00:15:13.611 
[2024-07-15 03:18:19.731987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:13.611 [2024-07-15 03:18:19.732005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:13.611 [2024-07-15 03:18:19.732056] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:13.611 [2024-07-15 03:18:19.732074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.611 [2024-07-15 03:18:19.732086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.611 [2024-07-15 03:18:19.732096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.611 [2024-07-15 03:18:19.732106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.611 [2024-07-15 03:18:19.735889] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:13.611 [2024-07-15 03:18:19.735912] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:13.611 [2024-07-15 03:18:19.736593] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.611 [2024-07-15 03:18:19.736683] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:13.611 [2024-07-15 03:18:19.736698] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:13.611 [2024-07-15 03:18:19.737600] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:13.611 [2024-07-15 03:18:19.737624] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:13.611 [2024-07-15 03:18:19.737680] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:13.611 [2024-07-15 03:18:19.739645] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:13.870 Available Spare Threshold: 0% 00:15:13.870 Life Percentage Used: 0% 00:15:13.870 Data Units Read: 0 00:15:13.870 Data Units Written: 0 00:15:13.870 Host Read Commands: 0 00:15:13.870 Host Write Commands: 0 00:15:13.870 Controller Busy Time: 0 minutes 00:15:13.870 Power Cycles: 0 00:15:13.870 Power On Hours: 0 hours 00:15:13.870 Unsafe Shutdowns: 0 00:15:13.870 Unrecoverable Media Errors: 0 00:15:13.870 Lifetime Error Log Entries: 0 00:15:13.870 Warning Temperature Time: 0 minutes 00:15:13.870 Critical Temperature Time: 0 minutes 00:15:13.870 00:15:13.870 Number of Queues 00:15:13.870 ================ 00:15:13.870 Number of I/O Submission Queues: 127 00:15:13.870 Number of I/O Completion Queues: 127 00:15:13.870 00:15:13.870 Active Namespaces 00:15:13.870 ================= 00:15:13.870 Namespace ID:1 00:15:13.870 Error Recovery Timeout: Unlimited 00:15:13.870 Command
Set Identifier: NVM (00h) 00:15:13.870 Deallocate: Supported 00:15:13.870 Deallocated/Unwritten Error: Not Supported 00:15:13.870 Deallocated Read Value: Unknown 00:15:13.870 Deallocate in Write Zeroes: Not Supported 00:15:13.870 Deallocated Guard Field: 0xFFFF 00:15:13.870 Flush: Supported 00:15:13.870 Reservation: Supported 00:15:13.870 Namespace Sharing Capabilities: Multiple Controllers 00:15:13.870 Size (in LBAs): 131072 (0GiB) 00:15:13.870 Capacity (in LBAs): 131072 (0GiB) 00:15:13.870 Utilization (in LBAs): 131072 (0GiB) 00:15:13.870 NGUID: 24EF26F947254EAF95305F73C0239119 00:15:13.870 UUID: 24ef26f9-4725-4eaf-9530-5f73c0239119 00:15:13.870 Thin Provisioning: Not Supported 00:15:13.870 Per-NS Atomic Units: Yes 00:15:13.870 Atomic Boundary Size (Normal): 0 00:15:13.870 Atomic Boundary Size (PFail): 0 00:15:13.870 Atomic Boundary Offset: 0 00:15:13.870 Maximum Single Source Range Length: 65535 00:15:13.870 Maximum Copy Length: 65535 00:15:13.870 Maximum Source Range Count: 1 00:15:13.870 NGUID/EUI64 Never Reused: No 00:15:13.870 Namespace Write Protected: No 00:15:13.870 Number of LBA Formats: 1 00:15:13.870 Current LBA Format: LBA Format #00 00:15:13.870 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:13.870 00:15:13.870 03:18:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:13.870 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.870 [2024-07-15 03:18:19.969716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.171 Initializing NVMe Controllers 00:15:19.171 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:19.171 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:19.171 Initialization complete. Launching workers. 00:15:19.171 ======================================================== 00:15:19.171 Latency(us) 00:15:19.171 Device Information : IOPS MiB/s Average min max 00:15:19.171 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34101.60 133.21 3753.04 1183.65 7526.79 00:15:19.171 ======================================================== 00:15:19.171 Total : 34101.60 133.21 3753.04 1183.65 7526.79 00:15:19.171 00:15:19.171 [2024-07-15 03:18:24.992202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.171 03:18:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:19.171 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.171 [2024-07-15 03:18:25.236421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:24.429 Initializing NVMe Controllers 00:15:24.429 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:24.429 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:24.429 Initialization complete. Launching workers. 
00:15:24.429 ======================================================== 00:15:24.429 Latency(us) 00:15:24.429 Device Information : IOPS MiB/s Average min max 00:15:24.429 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7994.05 6912.99 15990.30 00:15:24.429 ======================================================== 00:15:24.429 Total : 16025.60 62.60 7994.05 6912.99 15990.30 00:15:24.429 00:15:24.429 [2024-07-15 03:18:30.271310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:24.429 03:18:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:24.429 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.429 [2024-07-15 03:18:30.482366] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:29.688 [2024-07-15 03:18:35.554227] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:29.688 Initializing NVMe Controllers 00:15:29.688 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:29.688 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:29.688 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:29.688 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:29.688 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:29.688 Initialization complete. Launching workers. 00:15:29.688 Starting thread on core 2 00:15:29.688 Starting thread on core 3 00:15:29.688 Starting thread on core 1 00:15:29.688 03:18:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:29.688 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.947 [2024-07-15 03:18:35.852840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:34.133 [2024-07-15 03:18:39.467142] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:34.133 Initializing NVMe Controllers 00:15:34.133 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:34.133 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:34.133 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:34.133 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:34.133 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:34.133 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:34.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:34.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:34.133 Initialization complete. Launching workers. 
00:15:34.133 Starting thread on core 1 with urgent priority queue 00:15:34.133 Starting thread on core 2 with urgent priority queue 00:15:34.133 Starting thread on core 3 with urgent priority queue 00:15:34.133 Starting thread on core 0 with urgent priority queue 00:15:34.133 SPDK bdev Controller (SPDK1 ) core 0: 2252.00 IO/s 44.40 secs/100000 ios 00:15:34.133 SPDK bdev Controller (SPDK1 ) core 1: 2374.67 IO/s 42.11 secs/100000 ios 00:15:34.133 SPDK bdev Controller (SPDK1 ) core 2: 2367.67 IO/s 42.24 secs/100000 ios 00:15:34.133 SPDK bdev Controller (SPDK1 ) core 3: 2291.67 IO/s 43.64 secs/100000 ios 00:15:34.133 ======================================================== 00:15:34.133 00:15:34.133 03:18:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:34.133 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.133 [2024-07-15 03:18:39.756404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:34.133 Initializing NVMe Controllers 00:15:34.133 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:34.133 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:34.133 Namespace ID: 1 size: 0GB 00:15:34.133 Initialization complete. 00:15:34.133 INFO: using host memory buffer for IO 00:15:34.133 Hello world! 00:15:34.133 [2024-07-15 03:18:39.791030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:34.133 03:18:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:34.133 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.133 [2024-07-15 03:18:40.095427] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.065 Initializing NVMe Controllers 00:15:35.065 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.065 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.065 Initialization complete. Launching workers. 
00:15:35.065 submit (in ns) avg, min, max = 8444.4, 3505.6, 4015972.2 00:15:35.065 complete (in ns) avg, min, max = 24770.6, 2072.2, 4022942.2 00:15:35.065 00:15:35.065 Submit histogram 00:15:35.065 ================ 00:15:35.065 Range in us Cumulative Count 00:15:35.065 3.484 - 3.508: 0.0150% ( 2) 00:15:35.065 3.508 - 3.532: 0.7597% ( 99) 00:15:35.065 3.532 - 3.556: 2.7379% ( 263) 00:15:35.065 3.556 - 3.579: 7.6344% ( 651) 00:15:35.065 3.579 - 3.603: 15.2388% ( 1011) 00:15:35.065 3.603 - 3.627: 25.0545% ( 1305) 00:15:35.065 3.627 - 3.650: 33.9601% ( 1184) 00:15:35.065 3.650 - 3.674: 40.9703% ( 932) 00:15:35.065 3.674 - 3.698: 46.4159% ( 724) 00:15:35.065 3.698 - 3.721: 52.0873% ( 754) 00:15:35.065 3.721 - 3.745: 56.5175% ( 589) 00:15:35.065 3.745 - 3.769: 60.2407% ( 495) 00:15:35.065 3.769 - 3.793: 64.0918% ( 512) 00:15:35.065 3.793 - 3.816: 67.7924% ( 492) 00:15:35.065 3.816 - 3.840: 71.9143% ( 548) 00:15:35.065 3.840 - 3.864: 76.5100% ( 611) 00:15:35.065 3.864 - 3.887: 80.3610% ( 512) 00:15:35.065 3.887 - 3.911: 83.2343% ( 382) 00:15:35.065 3.911 - 3.935: 85.8518% ( 348) 00:15:35.065 3.935 - 3.959: 87.6044% ( 233) 00:15:35.065 3.959 - 3.982: 89.2441% ( 218) 00:15:35.065 3.982 - 4.006: 90.6581% ( 188) 00:15:35.065 4.006 - 4.030: 91.7187% ( 141) 00:15:35.065 4.030 - 4.053: 92.7115% ( 132) 00:15:35.065 4.053 - 4.077: 93.6743% ( 128) 00:15:35.065 4.077 - 4.101: 94.4641% ( 105) 00:15:35.065 4.101 - 4.124: 94.9605% ( 66) 00:15:35.065 4.124 - 4.148: 95.4494% ( 65) 00:15:35.065 4.148 - 4.172: 95.7728% ( 43) 00:15:35.065 4.172 - 4.196: 96.0737% ( 40) 00:15:35.065 4.196 - 4.219: 96.3069% ( 31) 00:15:35.065 4.219 - 4.243: 96.4949% ( 25) 00:15:35.065 4.243 - 4.267: 96.6604% ( 22) 00:15:35.065 4.267 - 4.290: 96.7807% ( 16) 00:15:35.065 4.290 - 4.314: 96.8484% ( 9) 00:15:35.065 4.314 - 4.338: 96.9613% ( 15) 00:15:35.065 4.338 - 4.361: 97.0590% ( 13) 00:15:35.065 4.361 - 4.385: 97.0741% ( 2) 00:15:35.065 4.385 - 4.409: 97.1493% ( 10) 00:15:35.065 4.409 - 4.433: 97.1869% ( 5) 00:15:35.065 4.433 - 4.456: 97.2095% ( 3) 00:15:35.065 4.456 - 4.480: 97.2320% ( 3) 00:15:35.065 4.480 - 4.504: 97.2396% ( 1) 00:15:35.065 4.504 - 4.527: 97.2772% ( 5) 00:15:35.065 4.527 - 4.551: 97.2922% ( 2) 00:15:35.065 4.551 - 4.575: 97.3298% ( 5) 00:15:35.065 4.575 - 4.599: 97.3449% ( 2) 00:15:35.065 4.599 - 4.622: 97.3599% ( 2) 00:15:35.065 4.622 - 4.646: 97.3674% ( 1) 00:15:35.065 4.646 - 4.670: 97.3750% ( 1) 00:15:35.065 4.670 - 4.693: 97.3975% ( 3) 00:15:35.065 4.693 - 4.717: 97.4276% ( 4) 00:15:35.065 4.717 - 4.741: 97.4803% ( 7) 00:15:35.065 4.741 - 4.764: 97.5254% ( 6) 00:15:35.065 4.764 - 4.788: 97.5705% ( 6) 00:15:35.065 4.788 - 4.812: 97.6081% ( 5) 00:15:35.065 4.812 - 4.836: 97.6382% ( 4) 00:15:35.065 4.836 - 4.859: 97.6758% ( 5) 00:15:35.065 4.859 - 4.883: 97.7360% ( 8) 00:15:35.065 4.883 - 4.907: 97.7811% ( 6) 00:15:35.065 4.907 - 4.930: 97.8263% ( 6) 00:15:35.065 4.930 - 4.954: 97.8639% ( 5) 00:15:35.065 4.954 - 4.978: 97.9015% ( 5) 00:15:35.065 4.978 - 5.001: 97.9391% ( 5) 00:15:35.065 5.001 - 5.025: 97.9767% ( 5) 00:15:35.065 5.025 - 5.049: 98.0068% ( 4) 00:15:35.065 5.049 - 5.073: 98.0444% ( 5) 00:15:35.065 5.073 - 5.096: 98.0970% ( 7) 00:15:35.065 5.096 - 5.120: 98.1121% ( 2) 00:15:35.065 5.120 - 5.144: 98.1422% ( 4) 00:15:35.065 5.144 - 5.167: 98.1572% ( 2) 00:15:35.065 5.167 - 5.191: 98.1722% ( 2) 00:15:35.065 5.191 - 5.215: 98.1873% ( 2) 00:15:35.065 5.215 - 5.239: 98.1948% ( 1) 00:15:35.065 5.239 - 5.262: 98.2023% ( 1) 00:15:35.065 5.262 - 5.286: 98.2099% ( 1) 00:15:35.065 5.286 - 5.310: 98.2174% ( 1) 
00:15:35.065 5.310 - 5.333: 98.2249% ( 1) 00:15:35.065 5.404 - 5.428: 98.2324% ( 1) 00:15:35.065 5.523 - 5.547: 98.2399% ( 1) 00:15:35.065 5.618 - 5.641: 98.2475% ( 1) 00:15:35.065 5.689 - 5.713: 98.2550% ( 1) 00:15:35.065 5.760 - 5.784: 98.2625% ( 1) 00:15:35.065 5.855 - 5.879: 98.2700% ( 1) 00:15:35.065 5.902 - 5.926: 98.2775% ( 1) 00:15:35.065 5.926 - 5.950: 98.2851% ( 1) 00:15:35.065 6.068 - 6.116: 98.2926% ( 1) 00:15:35.065 6.116 - 6.163: 98.3001% ( 1) 00:15:35.065 6.258 - 6.305: 98.3076% ( 1) 00:15:35.065 6.542 - 6.590: 98.3152% ( 1) 00:15:35.065 6.637 - 6.684: 98.3302% ( 2) 00:15:35.065 6.684 - 6.732: 98.3377% ( 1) 00:15:35.065 6.732 - 6.779: 98.3452% ( 1) 00:15:35.065 6.827 - 6.874: 98.3528% ( 1) 00:15:35.065 7.064 - 7.111: 98.3603% ( 1) 00:15:35.065 7.111 - 7.159: 98.3678% ( 1) 00:15:35.065 7.301 - 7.348: 98.3753% ( 1) 00:15:35.065 7.348 - 7.396: 98.3829% ( 1) 00:15:35.065 7.396 - 7.443: 98.3904% ( 1) 00:15:35.065 7.490 - 7.538: 98.3979% ( 1) 00:15:35.065 7.585 - 7.633: 98.4054% ( 1) 00:15:35.065 7.727 - 7.775: 98.4129% ( 1) 00:15:35.065 8.107 - 8.154: 98.4280% ( 2) 00:15:35.065 8.154 - 8.201: 98.4355% ( 1) 00:15:35.065 8.201 - 8.249: 98.4430% ( 1) 00:15:35.065 8.249 - 8.296: 98.4505% ( 1) 00:15:35.065 8.391 - 8.439: 98.4581% ( 1) 00:15:35.065 8.486 - 8.533: 98.4656% ( 1) 00:15:35.065 8.581 - 8.628: 98.4806% ( 2) 00:15:35.065 8.628 - 8.676: 98.4882% ( 1) 00:15:35.065 8.723 - 8.770: 98.4957% ( 1) 00:15:35.065 8.818 - 8.865: 98.5032% ( 1) 00:15:35.065 8.865 - 8.913: 98.5107% ( 1) 00:15:35.065 8.960 - 9.007: 98.5333% ( 3) 00:15:35.065 9.007 - 9.055: 98.5483% ( 2) 00:15:35.065 9.102 - 9.150: 98.5634% ( 2) 00:15:35.065 9.150 - 9.197: 98.5784% ( 2) 00:15:35.065 9.197 - 9.244: 98.5859% ( 1) 00:15:35.065 9.244 - 9.292: 98.6010% ( 2) 00:15:35.065 9.292 - 9.339: 98.6160% ( 2) 00:15:35.065 9.339 - 9.387: 98.6235% ( 1) 00:15:35.065 9.387 - 9.434: 98.6311% ( 1) 00:15:35.065 9.481 - 9.529: 98.6386% ( 1) 00:15:35.065 9.671 - 9.719: 98.6461% ( 1) 00:15:35.065 9.813 - 9.861: 98.6536% ( 1) 00:15:35.065 9.956 - 10.003: 98.6612% ( 1) 00:15:35.065 10.003 - 10.050: 98.6687% ( 1) 00:15:35.065 10.050 - 10.098: 98.6837% ( 2) 00:15:35.065 10.098 - 10.145: 98.6912% ( 1) 00:15:35.065 10.145 - 10.193: 98.7063% ( 2) 00:15:35.065 10.382 - 10.430: 98.7138% ( 1) 00:15:35.065 10.524 - 10.572: 98.7288% ( 2) 00:15:35.065 10.619 - 10.667: 98.7589% ( 4) 00:15:35.065 10.714 - 10.761: 98.7665% ( 1) 00:15:35.065 10.809 - 10.856: 98.7815% ( 2) 00:15:35.065 10.999 - 11.046: 98.7965% ( 2) 00:15:35.065 11.046 - 11.093: 98.8041% ( 1) 00:15:35.065 11.283 - 11.330: 98.8116% ( 1) 00:15:35.065 11.662 - 11.710: 98.8191% ( 1) 00:15:35.065 11.757 - 11.804: 98.8266% ( 1) 00:15:35.065 11.804 - 11.852: 98.8341% ( 1) 00:15:35.065 12.089 - 12.136: 98.8417% ( 1) 00:15:35.065 12.231 - 12.326: 98.8492% ( 1) 00:15:35.065 12.326 - 12.421: 98.8567% ( 1) 00:15:35.065 12.421 - 12.516: 98.8718% ( 2) 00:15:35.065 12.516 - 12.610: 98.8793% ( 1) 00:15:35.065 12.705 - 12.800: 98.8868% ( 1) 00:15:35.065 13.084 - 13.179: 98.8943% ( 1) 00:15:35.065 13.179 - 13.274: 98.9018% ( 1) 00:15:35.065 13.274 - 13.369: 98.9244% ( 3) 00:15:35.065 13.464 - 13.559: 98.9319% ( 1) 00:15:35.065 13.559 - 13.653: 98.9470% ( 2) 00:15:35.065 13.653 - 13.748: 98.9545% ( 1) 00:15:35.065 13.748 - 13.843: 98.9620% ( 1) 00:15:35.065 13.938 - 14.033: 98.9695% ( 1) 00:15:35.065 14.222 - 14.317: 98.9771% ( 1) 00:15:35.065 14.317 - 14.412: 98.9846% ( 1) 00:15:35.065 14.412 - 14.507: 98.9996% ( 2) 00:15:35.065 14.507 - 14.601: 99.0071% ( 1) 00:15:35.065 14.601 - 14.696: 99.0147% ( 1) 
00:15:35.065 14.696 - 14.791: 99.0222% ( 1) 00:15:35.065 14.791 - 14.886: 99.0372% ( 2) 00:15:35.065 14.981 - 15.076: 99.0448% ( 1) 00:15:35.065 15.170 - 15.265: 99.0598% ( 2) 00:15:35.065 16.877 - 16.972: 99.0673% ( 1) 00:15:35.065 17.161 - 17.256: 99.0824% ( 2) 00:15:35.065 17.256 - 17.351: 99.0974% ( 2) 00:15:35.065 17.351 - 17.446: 99.1275% ( 4) 00:15:35.065 17.446 - 17.541: 99.1576% ( 4) 00:15:35.065 17.541 - 17.636: 99.1877% ( 4) 00:15:35.065 17.636 - 17.730: 99.2403% ( 7) 00:15:35.065 17.730 - 17.825: 99.3231% ( 11) 00:15:35.065 17.825 - 17.920: 99.3607% ( 5) 00:15:35.065 17.920 - 18.015: 99.3757% ( 2) 00:15:35.065 18.015 - 18.110: 99.4284% ( 7) 00:15:35.065 18.110 - 18.204: 99.4735% ( 6) 00:15:35.065 18.204 - 18.299: 99.5337% ( 8) 00:15:35.065 18.299 - 18.394: 99.6014% ( 9) 00:15:35.065 18.394 - 18.489: 99.6766% ( 10) 00:15:35.065 18.489 - 18.584: 99.7292% ( 7) 00:15:35.065 18.584 - 18.679: 99.7518% ( 3) 00:15:35.065 18.679 - 18.773: 99.7744% ( 3) 00:15:35.065 18.773 - 18.868: 99.7894% ( 2) 00:15:35.065 18.868 - 18.963: 99.8044% ( 2) 00:15:35.065 19.058 - 19.153: 99.8120% ( 1) 00:15:35.065 19.247 - 19.342: 99.8195% ( 1) 00:15:35.065 19.437 - 19.532: 99.8270% ( 1) 00:15:35.065 19.627 - 19.721: 99.8345% ( 1) 00:15:35.065 19.816 - 19.911: 99.8420% ( 1) 00:15:35.065 19.911 - 20.006: 99.8496% ( 1) 00:15:35.065 20.385 - 20.480: 99.8571% ( 1) 00:15:35.065 21.807 - 21.902: 99.8646% ( 1) 00:15:35.065 22.661 - 22.756: 99.8721% ( 1) 00:15:35.065 23.704 - 23.799: 99.8797% ( 1) 00:15:35.065 34.133 - 34.323: 99.8872% ( 1) 00:15:35.065 3980.705 - 4004.978: 99.9850% ( 13) 00:15:35.065 4004.978 - 4029.250: 100.0000% ( 2) 00:15:35.065 00:15:35.065 Complete histogram 00:15:35.065 ================== 00:15:35.065 Range in us Cumulative Count 00:15:35.065 2.062 - 2.074: 0.0075% ( 1) 00:15:35.065 2.074 - 2.086: 16.0135% ( 2128) 00:15:35.065 2.086 - 2.098: 41.7300% ( 3419) 00:15:35.065 2.098 - 2.110: 43.3471% ( 215) 00:15:35.065 2.110 - 2.121: 51.7262% ( 1114) 00:15:35.065 2.121 - 2.133: 56.3370% ( 613) 00:15:35.065 2.133 - 2.145: 57.8413% ( 200) 00:15:35.065 2.145 - 2.157: 69.0636% ( 1492) 00:15:35.065 2.157 - 2.169: 75.1185% ( 805) 00:15:35.065 2.169 - 2.181: 76.0587% ( 125) 00:15:35.065 2.181 - 2.193: 79.6690% ( 480) 00:15:35.065 2.193 - 2.204: 81.4291% ( 234) 00:15:35.065 2.204 - 2.216: 81.9556% ( 70) 00:15:35.065 2.216 - 2.228: 85.9346% ( 529) 00:15:35.065 2.228 - 2.240: 88.8906% ( 393) 00:15:35.065 2.240 - 2.252: 91.0643% ( 289) 00:15:35.065 2.252 - 2.264: 92.8996% ( 244) 00:15:35.065 2.264 - 2.276: 93.6066% ( 94) 00:15:35.065 2.276 - 2.287: 93.9451% ( 45) 00:15:35.065 2.287 - 2.299: 94.2760% ( 44) 00:15:35.065 2.299 - 2.311: 94.5995% ( 43) 00:15:35.065 2.311 - 2.323: 95.2012% ( 80) 00:15:35.065 2.323 - 2.335: 95.5096% ( 41) 00:15:35.065 2.335 - 2.347: 95.6074% ( 13) 00:15:35.065 2.347 - 2.359: 95.6675% ( 8) 00:15:35.065 2.359 - 2.370: 95.6901% ( 3) 00:15:35.065 2.370 - 2.382: 95.7428% ( 7) 00:15:35.065 2.382 - 2.394: 95.8932% ( 20) 00:15:35.065 2.394 - 2.406: 96.2768% ( 51) 00:15:35.065 2.406 - 2.418: 96.4648% ( 25) 00:15:35.065 2.418 - 2.430: 96.6228% ( 21) 00:15:35.065 2.430 - 2.441: 96.8635% ( 32) 00:15:35.066 2.441 - 2.453: 97.0139% ( 20) 00:15:35.066 2.453 - 2.465: 97.1568% ( 19) 00:15:35.066 2.465 - 2.477: 97.3674% ( 28) 00:15:35.066 2.477 - 2.489: 97.4953% ( 17) 00:15:35.066 2.489 - 2.501: 97.6081% ( 15) 00:15:35.066 2.501 - 2.513: 97.7886% ( 24) 00:15:35.066 2.513 - 2.524: 97.8714% ( 11) 00:15:35.066 2.524 - 2.536: 98.0068% ( 18) 00:15:35.066 2.536 - 2.548: 98.0820% ( 10) 00:15:35.066 2.548 
- 2.560: 98.1422% ( 8) 00:15:35.066 2.560 - 2.572: 98.2174% ( 10) 00:15:35.066 2.572 - 2.584: 98.2775% ( 8) 00:15:35.066 2.584 - 2.596: 98.3302% ( 7) 00:15:35.066 2.596 - 2.607: 98.3528% ( 3) 00:15:35.066 2.607 - 2.619: 98.3753% ( 3) 00:15:35.066 2.619 - 2.631: 98.3829% ( 1) 00:15:35.066 2.631 - 2.643: 98.3904% ( 1) 00:15:35.066 2.643 - 2.655: 98.4129% ( 3) 00:15:35.066 2.655 - 2.667: 98.4205% ( 1) 00:15:35.066 2.702 - 2.714: 98.4280% ( 1) 00:15:35.066 2.714 - 2.726: 98.4355% ( 1) 00:15:35.066 2.726 - 2.738: 98.4430% ( 1) 00:15:35.066 [2024-07-15 03:18:41.116669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.066 2.750 - 2.761: 98.4581% ( 2) 00:15:35.066 2.761 - 2.773: 98.4656% ( 1) 00:15:35.066 2.773 - 2.785: 98.4731% ( 1) 00:15:35.066 2.809 - 2.821: 98.4806% ( 1) 00:15:35.066 2.833 - 2.844: 98.4882% ( 1) 00:15:35.066 2.880 - 2.892: 98.4957% ( 1) 00:15:35.066 2.927 - 2.939: 98.5032% ( 1) 00:15:35.066 2.999 - 3.010: 98.5107% ( 1) 00:15:35.066 3.295 - 3.319: 98.5182% ( 1) 00:15:35.066 3.319 - 3.342: 98.5258% ( 1) 00:15:35.066 3.342 - 3.366: 98.5333% ( 1) 00:15:35.066 3.390 - 3.413: 98.5408% ( 1) 00:15:35.066 3.413 - 3.437: 98.5558% ( 2) 00:15:35.066 3.437 - 3.461: 98.5709% ( 2) 00:15:35.066 3.484 - 3.508: 98.5784% ( 1) 00:15:35.066 3.508 - 3.532: 98.5935% ( 2) 00:15:35.066 3.532 - 3.556: 98.6085% ( 2) 00:15:35.066 3.579 - 3.603: 98.6235% ( 2) 00:15:35.066 3.603 - 3.627: 98.6311% ( 1) 00:15:35.066 3.627 - 3.650: 98.6386% ( 1) 00:15:35.066 3.650 - 3.674: 98.6461% ( 1) 00:15:35.066 3.698 - 3.721: 98.6687% ( 3) 00:15:35.066 3.745 - 3.769: 98.6762% ( 1) 00:15:35.066 3.769 - 3.793: 98.6912% ( 2) 00:15:35.066 3.793 - 3.816: 98.6988% ( 1) 00:15:35.066 3.840 - 3.864: 98.7063% ( 1) 00:15:35.066 3.864 - 3.887: 98.7138% ( 1) 00:15:35.066 3.887 - 3.911: 98.7288% ( 2) 00:15:35.066 3.911 - 3.935: 98.7364% ( 1) 00:15:35.066 3.959 - 3.982: 98.7439% ( 1) 00:15:35.066 4.006 - 4.030: 98.7665% ( 3) 00:15:35.066 4.053 - 4.077: 98.7740% ( 1) 00:15:35.066 4.148 - 4.172: 98.7815% ( 1) 00:15:35.066 4.385 - 4.409: 98.7890% ( 1) 00:15:35.066 4.409 - 4.433: 98.7965% ( 1) 00:15:35.066 5.902 - 5.926: 98.8041% ( 1) 00:15:35.066 6.044 - 6.068: 98.8116% ( 1) 00:15:35.066 6.779 - 6.827: 98.8191% ( 1) 00:15:35.066 6.827 - 6.874: 98.8266% ( 1) 00:15:35.066 6.921 - 6.969: 98.8341% ( 1) 00:15:35.066 6.969 - 7.016: 98.8417% ( 1) 00:15:35.066 7.016 - 7.064: 98.8492% ( 1) 00:15:35.066 7.206 - 7.253: 98.8567% ( 1) 00:15:35.066 7.490 - 7.538: 98.8642% ( 1) 00:15:35.066 7.538 - 7.585: 98.8718% ( 1) 00:15:35.066 7.585 - 7.633: 98.8793% ( 1) 00:15:35.066 7.727 - 7.775: 98.8868% ( 1) 00:15:35.066 7.870 - 7.917: 98.8943% ( 1) 00:15:35.066 8.154 - 8.201: 98.9018% ( 1) 00:15:35.066 8.439 - 8.486: 98.9094% ( 1) 00:15:35.066 8.818 - 8.865: 98.9169% ( 1) 00:15:35.066 9.102 - 9.150: 98.9244% ( 1) 00:15:35.066 10.999 - 11.046: 98.9319% ( 1) 00:15:35.066 11.046 - 11.093: 98.9395% ( 1) 00:15:35.066 12.089 - 12.136: 98.9470% ( 1) 00:15:35.066 15.360 - 15.455: 98.9620% ( 2) 00:15:35.066 15.455 - 15.550: 98.9695% ( 1) 00:15:35.066 15.550 - 15.644: 98.9771% ( 1) 00:15:35.066 15.644 - 15.739: 98.9996% ( 3) 00:15:35.066 15.739 - 15.834: 99.0147% ( 2) 00:15:35.066 15.834 - 15.929: 99.0222% ( 1) 00:15:35.066 15.929 - 16.024: 99.0448% ( 3) 00:15:35.066 16.119 - 16.213: 99.0824% ( 5) 00:15:35.066 16.213 - 16.308: 99.1124% ( 4) 00:15:35.066 16.308 - 16.403: 99.1350% ( 3) 00:15:35.066 16.403 - 16.498: 99.1726% ( 5) 00:15:35.066 16.498 - 16.593: 99.1801% ( 1) 00:15:35.066 16.593 - 16.687:
99.2328% ( 7) 00:15:35.066 16.687 - 16.782: 99.2779% ( 6) 00:15:35.066 16.782 - 16.877: 99.3005% ( 3) 00:15:35.066 16.877 - 16.972: 99.3531% ( 7) 00:15:35.066 16.972 - 17.067: 99.3607% ( 1) 00:15:35.066 17.067 - 17.161: 99.3682% ( 1) 00:15:35.066 17.161 - 17.256: 99.3983% ( 4) 00:15:35.066 17.730 - 17.825: 99.4058% ( 1) 00:15:35.066 17.825 - 17.920: 99.4133% ( 1) 00:15:35.066 18.394 - 18.489: 99.4208% ( 1) 00:15:35.066 18.773 - 18.868: 99.4284% ( 1) 00:15:35.066 56.889 - 57.268: 99.4359% ( 1) 00:15:35.066 3616.616 - 3640.889: 99.4434% ( 1) 00:15:35.066 3980.705 - 4004.978: 99.9173% ( 63) 00:15:35.066 4004.978 - 4029.250: 100.0000% ( 11) 00:15:35.066 00:15:35.066 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:35.066 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:35.066 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:35.066 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:35.066 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:35.323 [ 00:15:35.323 { 00:15:35.323 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.323 "subtype": "Discovery", 00:15:35.323 "listen_addresses": [], 00:15:35.323 "allow_any_host": true, 00:15:35.323 "hosts": [] 00:15:35.323 }, 00:15:35.323 { 00:15:35.323 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:35.323 "subtype": "NVMe", 00:15:35.323 "listen_addresses": [ 00:15:35.323 { 00:15:35.323 "trtype": "VFIOUSER", 00:15:35.323 "adrfam": "IPv4", 00:15:35.323 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:35.324 "trsvcid": "0" 00:15:35.324 } 00:15:35.324 ], 00:15:35.324 "allow_any_host": true, 00:15:35.324 "hosts": [], 00:15:35.324 "serial_number": "SPDK1", 00:15:35.324 "model_number": "SPDK bdev Controller", 00:15:35.324 "max_namespaces": 32, 00:15:35.324 "min_cntlid": 1, 00:15:35.324 "max_cntlid": 65519, 00:15:35.324 "namespaces": [ 00:15:35.324 { 00:15:35.324 "nsid": 1, 00:15:35.324 "bdev_name": "Malloc1", 00:15:35.324 "name": "Malloc1", 00:15:35.324 "nguid": "24EF26F947254EAF95305F73C0239119", 00:15:35.324 "uuid": "24ef26f9-4725-4eaf-9530-5f73c0239119" 00:15:35.324 } 00:15:35.324 ] 00:15:35.324 }, 00:15:35.324 { 00:15:35.324 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:35.324 "subtype": "NVMe", 00:15:35.324 "listen_addresses": [ 00:15:35.324 { 00:15:35.324 "trtype": "VFIOUSER", 00:15:35.324 "adrfam": "IPv4", 00:15:35.324 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:35.324 "trsvcid": "0" 00:15:35.324 } 00:15:35.324 ], 00:15:35.324 "allow_any_host": true, 00:15:35.324 "hosts": [], 00:15:35.324 "serial_number": "SPDK2", 00:15:35.324 "model_number": "SPDK bdev Controller", 00:15:35.324 "max_namespaces": 32, 00:15:35.324 "min_cntlid": 1, 00:15:35.324 "max_cntlid": 65519, 00:15:35.324 "namespaces": [ 00:15:35.324 { 00:15:35.324 "nsid": 1, 00:15:35.324 "bdev_name": "Malloc2", 00:15:35.324 "name": "Malloc2", 00:15:35.324 "nguid": "94BFD051513B4EB5A63FA198B52FAF09", 00:15:35.324 "uuid": "94bfd051-513b-4eb5-a63f-a198b52faf09" 00:15:35.324 } 00:15:35.324 ] 00:15:35.324 } 00:15:35.324 ] 00:15:35.324 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:35.324 03:18:41 nvmf_tcp.nvmf_vfio_user 
-- target/nvmf_vfio_user.sh@34 -- # aerpid=3156795 00:15:35.324 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:35.324 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:35.324 03:18:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:35.324 03:18:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.324 03:18:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.324 03:18:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:35.324 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:35.324 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:35.582 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.582 [2024-07-15 03:18:41.596372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.840 Malloc3 00:15:35.840 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:35.840 [2024-07-15 03:18:41.965961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.840 03:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:36.096 Asynchronous Event Request test 00:15:36.096 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.096 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:36.096 Registering asynchronous event callbacks... 00:15:36.096 Starting namespace attribute notice tests for all controllers... 00:15:36.096 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:36.096 aer_cb - Changed Namespace 00:15:36.096 Cleaning up... 
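
The AER exercise traced above reduces to four steps: arm the aer example tool against the vfio-user controller, wait for the touch file that signals it is armed, hot-add a namespace over RPC, and let the resulting namespace-attribute AEN complete the run. A minimal shell sketch of the same flow, assuming $SPDK_ROOT points at the SPDK checkout and the target is already serving /var/run/vfio-user/domain/vfio-user1/1 (both assumptions; every command and flag below is taken from the trace itself):

  # Start the AER listener in the background; -t names the file it touches
  # once its event callbacks are registered.
  "$SPDK_ROOT/test/nvme/aer/aer" \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  # Poll for the touch file (what the harness's waitforfile helper does), then clear it.
  while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
  rm -f /tmp/aer_touch_file
  # Create a 64 MB malloc bdev with 512-byte blocks and hot-add it as nsid 2;
  # the add is what fires the "aer_cb - Changed Namespace" notice logged above.
  "$SPDK_ROOT/scripts/rpc.py" bdev_malloc_create 64 512 --name Malloc3
  "$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  # The listener exits once the AEN has been observed.
  wait "$aerpid"

The nvmf_get_subsystems dump that follows shows the namespace in place.
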
00:15:36.354 [ 00:15:36.354 { 00:15:36.354 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:36.354 "subtype": "Discovery", 00:15:36.354 "listen_addresses": [], 00:15:36.354 "allow_any_host": true, 00:15:36.354 "hosts": [] 00:15:36.354 }, 00:15:36.354 { 00:15:36.354 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:36.354 "subtype": "NVMe", 00:15:36.354 "listen_addresses": [ 00:15:36.354 { 00:15:36.354 "trtype": "VFIOUSER", 00:15:36.354 "adrfam": "IPv4", 00:15:36.354 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:36.354 "trsvcid": "0" 00:15:36.354 } 00:15:36.354 ], 00:15:36.354 "allow_any_host": true, 00:15:36.354 "hosts": [], 00:15:36.354 "serial_number": "SPDK1", 00:15:36.354 "model_number": "SPDK bdev Controller", 00:15:36.354 "max_namespaces": 32, 00:15:36.354 "min_cntlid": 1, 00:15:36.354 "max_cntlid": 65519, 00:15:36.354 "namespaces": [ 00:15:36.354 { 00:15:36.354 "nsid": 1, 00:15:36.354 "bdev_name": "Malloc1", 00:15:36.354 "name": "Malloc1", 00:15:36.354 "nguid": "24EF26F947254EAF95305F73C0239119", 00:15:36.354 "uuid": "24ef26f9-4725-4eaf-9530-5f73c0239119" 00:15:36.354 }, 00:15:36.354 { 00:15:36.354 "nsid": 2, 00:15:36.354 "bdev_name": "Malloc3", 00:15:36.354 "name": "Malloc3", 00:15:36.354 "nguid": "D4DE2962E02649C9B60DBAFD9A3C3DEA", 00:15:36.354 "uuid": "d4de2962-e026-49c9-b60d-bafd9a3c3dea" 00:15:36.354 } 00:15:36.354 ] 00:15:36.354 }, 00:15:36.354 { 00:15:36.354 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:36.354 "subtype": "NVMe", 00:15:36.354 "listen_addresses": [ 00:15:36.354 { 00:15:36.354 "trtype": "VFIOUSER", 00:15:36.354 "adrfam": "IPv4", 00:15:36.354 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:36.354 "trsvcid": "0" 00:15:36.354 } 00:15:36.354 ], 00:15:36.354 "allow_any_host": true, 00:15:36.354 "hosts": [], 00:15:36.354 "serial_number": "SPDK2", 00:15:36.354 "model_number": "SPDK bdev Controller", 00:15:36.354 "max_namespaces": 32, 00:15:36.354 "min_cntlid": 1, 00:15:36.354 "max_cntlid": 65519, 00:15:36.354 "namespaces": [ 00:15:36.354 { 00:15:36.354 "nsid": 1, 00:15:36.354 "bdev_name": "Malloc2", 00:15:36.354 "name": "Malloc2", 00:15:36.354 "nguid": "94BFD051513B4EB5A63FA198B52FAF09", 00:15:36.354 "uuid": "94bfd051-513b-4eb5-a63f-a198b52faf09" 00:15:36.354 } 00:15:36.354 ] 00:15:36.354 } 00:15:36.354 ] 00:15:36.354 03:18:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3156795 00:15:36.354 03:18:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.354 03:18:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:36.354 03:18:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:36.354 03:18:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:36.354 [2024-07-15 03:18:42.271391] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
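
In the dump above, nqn.2019-07.io.spdk:cnode1 now carries two namespaces, Malloc1 as nsid 1 and the hot-added Malloc3 as nsid 2, while cnode2 is untouched. A quick way to make the same check outside the harness, assuming jq is available (jq is not part of this test; the NQN comes straight from the JSON above):

  "$SPDK_ROOT/scripts/rpc.py" nvmf_get_subsystems \
      | jq '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode1") | .namespaces[].nsid'
  # Expected after the hot-add: two lines, 1 and 2.
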
00:15:36.354 [2024-07-15 03:18:42.271429] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3156926 ] 00:15:36.354 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.354 [2024-07-15 03:18:42.305823] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:36.354 [2024-07-15 03:18:42.314229] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:36.354 [2024-07-15 03:18:42.314260] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa2366d3000 00:15:36.354 [2024-07-15 03:18:42.315210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.354 [2024-07-15 03:18:42.316216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.354 [2024-07-15 03:18:42.317225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.354 [2024-07-15 03:18:42.318244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.354 [2024-07-15 03:18:42.319233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.354 [2024-07-15 03:18:42.320241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.354 [2024-07-15 03:18:42.321248] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.354 [2024-07-15 03:18:42.322275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.354 [2024-07-15 03:18:42.323268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:36.354 [2024-07-15 03:18:42.323289] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa235487000 00:15:36.354 [2024-07-15 03:18:42.324401] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:36.354 [2024-07-15 03:18:42.336528] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:36.354 [2024-07-15 03:18:42.336562] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:36.354 [2024-07-15 03:18:42.345703] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:36.354 [2024-07-15 03:18:42.345759] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:36.354 [2024-07-15 03:18:42.345847] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:15:36.354 [2024-07-15 03:18:42.345893] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:36.354 [2024-07-15 03:18:42.345905] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:36.354 [2024-07-15 03:18:42.346696] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:36.354 [2024-07-15 03:18:42.346715] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:36.354 [2024-07-15 03:18:42.346728] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:36.354 [2024-07-15 03:18:42.347700] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:36.354 [2024-07-15 03:18:42.347719] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:36.354 [2024-07-15 03:18:42.347733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:36.354 [2024-07-15 03:18:42.348709] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:36.354 [2024-07-15 03:18:42.348730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:36.354 [2024-07-15 03:18:42.349712] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:36.354 [2024-07-15 03:18:42.349732] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:36.354 [2024-07-15 03:18:42.349741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:36.354 [2024-07-15 03:18:42.349753] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:36.354 [2024-07-15 03:18:42.349869] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:36.354 [2024-07-15 03:18:42.349888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:36.354 [2024-07-15 03:18:42.349898] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:36.354 [2024-07-15 03:18:42.350718] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:36.355 [2024-07-15 03:18:42.351727] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:36.355 [2024-07-15 03:18:42.352737] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:36.355 [2024-07-15 03:18:42.353733] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:36.355 [2024-07-15 03:18:42.353812] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:36.355 [2024-07-15 03:18:42.354749] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:36.355 [2024-07-15 03:18:42.354768] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:36.355 [2024-07-15 03:18:42.354778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.354801] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:36.355 [2024-07-15 03:18:42.354815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.354836] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.355 [2024-07-15 03:18:42.354846] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.355 [2024-07-15 03:18:42.354887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.355 [2024-07-15 03:18:42.366891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:36.355 [2024-07-15 03:18:42.366914] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:36.355 [2024-07-15 03:18:42.366928] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:36.355 [2024-07-15 03:18:42.366937] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:36.355 [2024-07-15 03:18:42.366945] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:36.355 [2024-07-15 03:18:42.366954] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:36.355 [2024-07-15 03:18:42.366962] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:36.355 [2024-07-15 03:18:42.366970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.366983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.367004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:15:36.355 [2024-07-15 03:18:42.374890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:36.355 [2024-07-15 03:18:42.374918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.355 [2024-07-15 03:18:42.374932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.355 [2024-07-15 03:18:42.374944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.355 [2024-07-15 03:18:42.374956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.355 [2024-07-15 03:18:42.374964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.374979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.374994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:36.355 [2024-07-15 03:18:42.382889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:36.355 [2024-07-15 03:18:42.382908] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:36.355 [2024-07-15 03:18:42.382917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.382929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.382939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.382953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:36.355 [2024-07-15 03:18:42.388885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:36.355 [2024-07-15 03:18:42.388958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.388974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.388989] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:36.355 [2024-07-15 03:18:42.388997] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:36.355 [2024-07-15 03:18:42.389007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:15:36.355 [2024-07-15 03:18:42.398889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:36.355 [2024-07-15 03:18:42.398912] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:36.355 [2024-07-15 03:18:42.398932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.398947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.398964] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.355 [2024-07-15 03:18:42.398973] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.355 [2024-07-15 03:18:42.398982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.355 [2024-07-15 03:18:42.406887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:36.355 [2024-07-15 03:18:42.406915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.406932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.406945] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.355 [2024-07-15 03:18:42.406953] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.355 [2024-07-15 03:18:42.406963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.355 [2024-07-15 03:18:42.414890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:36.355 [2024-07-15 03:18:42.414911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.414925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.414940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.414951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.414960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.414969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:36.355 
[2024-07-15 03:18:42.414977] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:36.355 [2024-07-15 03:18:42.414985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:36.355 [2024-07-15 03:18:42.414994] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:36.355 [2024-07-15 03:18:42.415019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:36.355 [2024-07-15 03:18:42.422885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:36.355 [2024-07-15 03:18:42.422912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:36.355 [2024-07-15 03:18:42.430890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:36.355 [2024-07-15 03:18:42.430915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:36.355 [2024-07-15 03:18:42.438904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:36.355 [2024-07-15 03:18:42.438934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:36.355 [2024-07-15 03:18:42.446891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:36.355 [2024-07-15 03:18:42.446924] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:36.355 [2024-07-15 03:18:42.446936] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:36.355 [2024-07-15 03:18:42.446942] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:36.355 [2024-07-15 03:18:42.446948] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:36.356 [2024-07-15 03:18:42.446958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:36.356 [2024-07-15 03:18:42.446970] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:36.356 [2024-07-15 03:18:42.446978] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:36.356 [2024-07-15 03:18:42.446987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:36.356 [2024-07-15 03:18:42.446998] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:36.356 [2024-07-15 03:18:42.447006] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.356 [2024-07-15 03:18:42.447014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:15:36.356 [2024-07-15 03:18:42.447026] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:36.356 [2024-07-15 03:18:42.447034] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:36.356 [2024-07-15 03:18:42.447043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:36.356 [2024-07-15 03:18:42.454890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:36.356 [2024-07-15 03:18:42.454917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:36.356 [2024-07-15 03:18:42.454935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:36.356 [2024-07-15 03:18:42.454947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:36.356 ===================================================== 00:15:36.356 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:36.356 ===================================================== 00:15:36.356 Controller Capabilities/Features 00:15:36.356 ================================ 00:15:36.356 Vendor ID: 4e58 00:15:36.356 Subsystem Vendor ID: 4e58 00:15:36.356 Serial Number: SPDK2 00:15:36.356 Model Number: SPDK bdev Controller 00:15:36.356 Firmware Version: 24.09 00:15:36.356 Recommended Arb Burst: 6 00:15:36.356 IEEE OUI Identifier: 8d 6b 50 00:15:36.356 Multi-path I/O 00:15:36.356 May have multiple subsystem ports: Yes 00:15:36.356 May have multiple controllers: Yes 00:15:36.356 Associated with SR-IOV VF: No 00:15:36.356 Max Data Transfer Size: 131072 00:15:36.356 Max Number of Namespaces: 32 00:15:36.356 Max Number of I/O Queues: 127 00:15:36.356 NVMe Specification Version (VS): 1.3 00:15:36.356 NVMe Specification Version (Identify): 1.3 00:15:36.356 Maximum Queue Entries: 256 00:15:36.356 Contiguous Queues Required: Yes 00:15:36.356 Arbitration Mechanisms Supported 00:15:36.356 Weighted Round Robin: Not Supported 00:15:36.356 Vendor Specific: Not Supported 00:15:36.356 Reset Timeout: 15000 ms 00:15:36.356 Doorbell Stride: 4 bytes 00:15:36.356 NVM Subsystem Reset: Not Supported 00:15:36.356 Command Sets Supported 00:15:36.356 NVM Command Set: Supported 00:15:36.356 Boot Partition: Not Supported 00:15:36.356 Memory Page Size Minimum: 4096 bytes 00:15:36.356 Memory Page Size Maximum: 4096 bytes 00:15:36.356 Persistent Memory Region: Not Supported 00:15:36.356 Optional Asynchronous Events Supported 00:15:36.356 Namespace Attribute Notices: Supported 00:15:36.356 Firmware Activation Notices: Not Supported 00:15:36.356 ANA Change Notices: Not Supported 00:15:36.356 PLE Aggregate Log Change Notices: Not Supported 00:15:36.356 LBA Status Info Alert Notices: Not Supported 00:15:36.356 EGE Aggregate Log Change Notices: Not Supported 00:15:36.356 Normal NVM Subsystem Shutdown event: Not Supported 00:15:36.356 Zone Descriptor Change Notices: Not Supported 00:15:36.356 Discovery Log Change Notices: Not Supported 00:15:36.356 Controller Attributes 00:15:36.356 128-bit Host Identifier: Supported 00:15:36.356 Non-Operational Permissive Mode: Not Supported 00:15:36.356 NVM Sets: Not Supported 00:15:36.356 Read Recovery Levels: Not Supported 
00:15:36.356 Endurance Groups: Not Supported 00:15:36.356 Predictable Latency Mode: Not Supported 00:15:36.356 Traffic Based Keep ALive: Not Supported 00:15:36.356 Namespace Granularity: Not Supported 00:15:36.356 SQ Associations: Not Supported 00:15:36.356 UUID List: Not Supported 00:15:36.356 Multi-Domain Subsystem: Not Supported 00:15:36.356 Fixed Capacity Management: Not Supported 00:15:36.356 Variable Capacity Management: Not Supported 00:15:36.356 Delete Endurance Group: Not Supported 00:15:36.356 Delete NVM Set: Not Supported 00:15:36.356 Extended LBA Formats Supported: Not Supported 00:15:36.356 Flexible Data Placement Supported: Not Supported 00:15:36.356 00:15:36.356 Controller Memory Buffer Support 00:15:36.356 ================================ 00:15:36.356 Supported: No 00:15:36.356 00:15:36.356 Persistent Memory Region Support 00:15:36.356 ================================ 00:15:36.356 Supported: No 00:15:36.356 00:15:36.356 Admin Command Set Attributes 00:15:36.356 ============================ 00:15:36.356 Security Send/Receive: Not Supported 00:15:36.356 Format NVM: Not Supported 00:15:36.356 Firmware Activate/Download: Not Supported 00:15:36.356 Namespace Management: Not Supported 00:15:36.356 Device Self-Test: Not Supported 00:15:36.356 Directives: Not Supported 00:15:36.356 NVMe-MI: Not Supported 00:15:36.356 Virtualization Management: Not Supported 00:15:36.356 Doorbell Buffer Config: Not Supported 00:15:36.356 Get LBA Status Capability: Not Supported 00:15:36.356 Command & Feature Lockdown Capability: Not Supported 00:15:36.356 Abort Command Limit: 4 00:15:36.356 Async Event Request Limit: 4 00:15:36.356 Number of Firmware Slots: N/A 00:15:36.356 Firmware Slot 1 Read-Only: N/A 00:15:36.356 Firmware Activation Without Reset: N/A 00:15:36.356 Multiple Update Detection Support: N/A 00:15:36.356 Firmware Update Granularity: No Information Provided 00:15:36.356 Per-Namespace SMART Log: No 00:15:36.356 Asymmetric Namespace Access Log Page: Not Supported 00:15:36.356 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:36.356 Command Effects Log Page: Supported 00:15:36.356 Get Log Page Extended Data: Supported 00:15:36.356 Telemetry Log Pages: Not Supported 00:15:36.356 Persistent Event Log Pages: Not Supported 00:15:36.356 Supported Log Pages Log Page: May Support 00:15:36.356 Commands Supported & Effects Log Page: Not Supported 00:15:36.356 Feature Identifiers & Effects Log Page:May Support 00:15:36.356 NVMe-MI Commands & Effects Log Page: May Support 00:15:36.356 Data Area 4 for Telemetry Log: Not Supported 00:15:36.356 Error Log Page Entries Supported: 128 00:15:36.356 Keep Alive: Supported 00:15:36.356 Keep Alive Granularity: 10000 ms 00:15:36.356 00:15:36.356 NVM Command Set Attributes 00:15:36.356 ========================== 00:15:36.356 Submission Queue Entry Size 00:15:36.356 Max: 64 00:15:36.356 Min: 64 00:15:36.356 Completion Queue Entry Size 00:15:36.356 Max: 16 00:15:36.356 Min: 16 00:15:36.356 Number of Namespaces: 32 00:15:36.356 Compare Command: Supported 00:15:36.356 Write Uncorrectable Command: Not Supported 00:15:36.356 Dataset Management Command: Supported 00:15:36.356 Write Zeroes Command: Supported 00:15:36.356 Set Features Save Field: Not Supported 00:15:36.356 Reservations: Not Supported 00:15:36.356 Timestamp: Not Supported 00:15:36.356 Copy: Supported 00:15:36.356 Volatile Write Cache: Present 00:15:36.356 Atomic Write Unit (Normal): 1 00:15:36.356 Atomic Write Unit (PFail): 1 00:15:36.356 Atomic Compare & Write Unit: 1 00:15:36.356 Fused Compare & Write: 
Supported 00:15:36.356 Scatter-Gather List 00:15:36.356 SGL Command Set: Supported (Dword aligned) 00:15:36.356 SGL Keyed: Not Supported 00:15:36.356 SGL Bit Bucket Descriptor: Not Supported 00:15:36.356 SGL Metadata Pointer: Not Supported 00:15:36.356 Oversized SGL: Not Supported 00:15:36.356 SGL Metadata Address: Not Supported 00:15:36.356 SGL Offset: Not Supported 00:15:36.356 Transport SGL Data Block: Not Supported 00:15:36.356 Replay Protected Memory Block: Not Supported 00:15:36.356 00:15:36.356 Firmware Slot Information 00:15:36.356 ========================= 00:15:36.356 Active slot: 1 00:15:36.356 Slot 1 Firmware Revision: 24.09 00:15:36.356 00:15:36.356 00:15:36.356 Commands Supported and Effects 00:15:36.356 ============================== 00:15:36.356 Admin Commands 00:15:36.356 -------------- 00:15:36.356 Get Log Page (02h): Supported 00:15:36.356 Identify (06h): Supported 00:15:36.356 Abort (08h): Supported 00:15:36.356 Set Features (09h): Supported 00:15:36.356 Get Features (0Ah): Supported 00:15:36.356 Asynchronous Event Request (0Ch): Supported 00:15:36.356 Keep Alive (18h): Supported 00:15:36.356 I/O Commands 00:15:36.356 ------------ 00:15:36.356 Flush (00h): Supported LBA-Change 00:15:36.357 Write (01h): Supported LBA-Change 00:15:36.357 Read (02h): Supported 00:15:36.357 Compare (05h): Supported 00:15:36.357 Write Zeroes (08h): Supported LBA-Change 00:15:36.357 Dataset Management (09h): Supported LBA-Change 00:15:36.357 Copy (19h): Supported LBA-Change 00:15:36.357 00:15:36.357 Error Log 00:15:36.357 ========= 00:15:36.357 00:15:36.357 Arbitration 00:15:36.357 =========== 00:15:36.357 Arbitration Burst: 1 00:15:36.357 00:15:36.357 Power Management 00:15:36.357 ================ 00:15:36.357 Number of Power States: 1 00:15:36.357 Current Power State: Power State #0 00:15:36.357 Power State #0: 00:15:36.357 Max Power: 0.00 W 00:15:36.357 Non-Operational State: Operational 00:15:36.357 Entry Latency: Not Reported 00:15:36.357 Exit Latency: Not Reported 00:15:36.357 Relative Read Throughput: 0 00:15:36.357 Relative Read Latency: 0 00:15:36.357 Relative Write Throughput: 0 00:15:36.357 Relative Write Latency: 0 00:15:36.357 Idle Power: Not Reported 00:15:36.357 Active Power: Not Reported 00:15:36.357 Non-Operational Permissive Mode: Not Supported 00:15:36.357 00:15:36.357 Health Information 00:15:36.357 ================== 00:15:36.357 Critical Warnings: 00:15:36.357 Available Spare Space: OK 00:15:36.357 Temperature: OK 00:15:36.357 Device Reliability: OK 00:15:36.357 Read Only: No 00:15:36.357 Volatile Memory Backup: OK 00:15:36.357 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:36.357 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:36.357 Available Spare: 0% 00:15:36.357 Available Sp[2024-07-15 03:18:42.455065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:36.357 [2024-07-15 03:18:42.461890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:36.357 [2024-07-15 03:18:42.461943] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:36.357 [2024-07-15 03:18:42.461961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.357 [2024-07-15 03:18:42.461972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.357 [2024-07-15 03:18:42.461982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.357 [2024-07-15 03:18:42.461992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.357 [2024-07-15 03:18:42.465891] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:36.357 [2024-07-15 03:18:42.465917] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:36.357 [2024-07-15 03:18:42.465979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:36.357 [2024-07-15 03:18:42.466064] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:36.357 [2024-07-15 03:18:42.466080] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:36.357 [2024-07-15 03:18:42.466988] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:36.357 [2024-07-15 03:18:42.467013] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:36.357 [2024-07-15 03:18:42.467064] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:36.357 [2024-07-15 03:18:42.468255] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:36.613 are Threshold: 0% 00:15:36.613 Life Percentage Used: 0% 00:15:36.613 Data Units Read: 0 00:15:36.613 Data Units Written: 0 00:15:36.613 Host Read Commands: 0 00:15:36.613 Host Write Commands: 0 00:15:36.613 Controller Busy Time: 0 minutes 00:15:36.613 Power Cycles: 0 00:15:36.613 Power On Hours: 0 hours 00:15:36.613 Unsafe Shutdowns: 0 00:15:36.613 Unrecoverable Media Errors: 0 00:15:36.613 Lifetime Error Log Entries: 0 00:15:36.613 Warning Temperature Time: 0 minutes 00:15:36.613 Critical Temperature Time: 0 minutes 00:15:36.613 00:15:36.613 Number of Queues 00:15:36.613 ================ 00:15:36.613 Number of I/O Submission Queues: 127 00:15:36.613 Number of I/O Completion Queues: 127 00:15:36.613 00:15:36.613 Active Namespaces 00:15:36.613 ================= 00:15:36.613 Namespace ID:1 00:15:36.613 Error Recovery Timeout: Unlimited 00:15:36.613 Command Set Identifier: NVM (00h) 00:15:36.613 Deallocate: Supported 00:15:36.613 Deallocated/Unwritten Error: Not Supported 00:15:36.613 Deallocated Read Value: Unknown 00:15:36.613 Deallocate in Write Zeroes: Not Supported 00:15:36.613 Deallocated Guard Field: 0xFFFF 00:15:36.613 Flush: Supported 00:15:36.613 Reservation: Supported 00:15:36.613 Namespace Sharing Capabilities: Multiple Controllers 00:15:36.613 Size (in LBAs): 131072 (0GiB) 00:15:36.613 Capacity (in LBAs): 131072 (0GiB) 00:15:36.613 Utilization (in LBAs): 131072 (0GiB) 00:15:36.613 NGUID: 94BFD051513B4EB5A63FA198B52FAF09 00:15:36.613 UUID: 94bfd051-513b-4eb5-a63f-a198b52faf09 00:15:36.613 Thin Provisioning: Not Supported 00:15:36.613 Per-NS Atomic Units: Yes 00:15:36.613 Atomic Boundary Size (Normal): 0 00:15:36.613 Atomic Boundary Size 
(PFail): 0 00:15:36.613 Atomic Boundary Offset: 0 00:15:36.613 Maximum Single Source Range Length: 65535 00:15:36.613 Maximum Copy Length: 65535 00:15:36.613 Maximum Source Range Count: 1 00:15:36.613 NGUID/EUI64 Never Reused: No 00:15:36.613 Namespace Write Protected: No 00:15:36.613 Number of LBA Formats: 1 00:15:36.613 Current LBA Format: LBA Format #00 00:15:36.613 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:36.613 00:15:36.613 03:18:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:36.613 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.613 [2024-07-15 03:18:42.697700] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.871 Initializing NVMe Controllers 00:15:41.871 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:41.871 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:41.871 Initialization complete. Launching workers. 00:15:41.871 ======================================================== 00:15:41.871 Latency(us) 00:15:41.871 Device Information : IOPS MiB/s Average min max 00:15:41.871 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34322.62 134.07 3728.64 1190.86 7331.77 00:15:41.871 ======================================================== 00:15:41.872 Total : 34322.62 134.07 3728.64 1190.86 7331.77 00:15:41.872 00:15:41.872 [2024-07-15 03:18:47.802233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.872 03:18:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:41.872 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.129 [2024-07-15 03:18:48.048886] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:47.430 Initializing NVMe Controllers 00:15:47.430 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:47.430 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:47.430 Initialization complete. Launching workers. 
00:15:47.430 ======================================================== 00:15:47.430 Latency(us) 00:15:47.430 Device Information : IOPS MiB/s Average min max 00:15:47.430 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31803.02 124.23 4023.55 1199.53 8279.25 00:15:47.430 ======================================================== 00:15:47.430 Total : 31803.02 124.23 4023.55 1199.53 8279.25 00:15:47.430 00:15:47.430 [2024-07-15 03:18:53.066647] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:47.430 03:18:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:47.430 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.430 [2024-07-15 03:18:53.278638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:52.685 [2024-07-15 03:18:58.411024] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:52.685 Initializing NVMe Controllers 00:15:52.685 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:52.685 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:52.685 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:52.685 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:52.685 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:52.685 Initialization complete. Launching workers. 00:15:52.685 Starting thread on core 2 00:15:52.685 Starting thread on core 3 00:15:52.685 Starting thread on core 1 00:15:52.685 03:18:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:52.685 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.685 [2024-07-15 03:18:58.720393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.963 [2024-07-15 03:19:01.777203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:55.963 Initializing NVMe Controllers 00:15:55.963 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.963 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.963 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:55.963 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:55.963 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:55.963 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:55.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:55.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:55.963 Initialization complete. Launching workers. 
00:15:55.963 Starting thread on core 1 with urgent priority queue 00:15:55.963 Starting thread on core 2 with urgent priority queue 00:15:55.963 Starting thread on core 3 with urgent priority queue 00:15:55.963 Starting thread on core 0 with urgent priority queue 00:15:55.963 SPDK bdev Controller (SPDK2 ) core 0: 3395.33 IO/s 29.45 secs/100000 ios 00:15:55.963 SPDK bdev Controller (SPDK2 ) core 1: 2911.67 IO/s 34.34 secs/100000 ios 00:15:55.963 SPDK bdev Controller (SPDK2 ) core 2: 3392.00 IO/s 29.48 secs/100000 ios 00:15:55.963 SPDK bdev Controller (SPDK2 ) core 3: 3334.33 IO/s 29.99 secs/100000 ios 00:15:55.963 ======================================================== 00:15:55.963 00:15:55.963 03:19:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:55.963 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.963 [2024-07-15 03:19:02.078369] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.963 Initializing NVMe Controllers 00:15:55.963 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.963 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.963 Namespace ID: 1 size: 0GB 00:15:55.963 Initialization complete. 00:15:55.963 INFO: using host memory buffer for IO 00:15:55.963 Hello world! 00:15:55.963 [2024-07-15 03:19:02.090452] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:56.221 03:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:56.221 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.479 [2024-07-15 03:19:02.378375] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.411 Initializing NVMe Controllers 00:15:57.411 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.411 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.411 Initialization complete. Launching workers. 
00:15:57.411 submit (in ns) avg, min, max = 8366.1, 3504.4, 4019594.4 00:15:57.411 complete (in ns) avg, min, max = 25225.6, 2057.8, 6010642.2 00:15:57.411 00:15:57.411 Submit histogram 00:15:57.411 ================ 00:15:57.411 Range in us Cumulative Count 00:15:57.411 3.484 - 3.508: 0.0151% ( 2) 00:15:57.411 3.508 - 3.532: 0.4317% ( 55) 00:15:57.411 3.532 - 3.556: 1.0301% ( 79) 00:15:57.411 3.556 - 3.579: 3.0523% ( 267) 00:15:57.411 3.579 - 3.603: 6.0895% ( 401) 00:15:57.411 3.603 - 3.627: 11.4974% ( 714) 00:15:57.411 3.627 - 3.650: 18.6246% ( 941) 00:15:57.411 3.650 - 3.674: 26.3501% ( 1020) 00:15:57.411 3.674 - 3.698: 33.7499% ( 977) 00:15:57.411 3.698 - 3.721: 41.6648% ( 1045) 00:15:57.411 3.721 - 3.745: 49.2540% ( 1002) 00:15:57.411 3.745 - 3.769: 55.3586% ( 806) 00:15:57.411 3.769 - 3.793: 59.9864% ( 611) 00:15:57.411 3.793 - 3.816: 63.6068% ( 478) 00:15:57.411 3.816 - 3.840: 66.6591% ( 403) 00:15:57.411 3.840 - 3.864: 70.1583% ( 462) 00:15:57.411 3.864 - 3.887: 73.5287% ( 445) 00:15:57.411 3.887 - 3.911: 77.3006% ( 498) 00:15:57.411 3.911 - 3.935: 80.8983% ( 475) 00:15:57.411 3.935 - 3.959: 84.0718% ( 419) 00:15:57.411 3.959 - 3.982: 86.5334% ( 325) 00:15:57.411 3.982 - 4.006: 88.7374% ( 291) 00:15:57.411 4.006 - 4.030: 90.0174% ( 169) 00:15:57.411 4.030 - 4.053: 91.2747% ( 166) 00:15:57.411 4.053 - 4.077: 92.1609% ( 117) 00:15:57.411 4.077 - 4.101: 92.9486% ( 104) 00:15:57.411 4.101 - 4.124: 93.6833% ( 97) 00:15:57.411 4.124 - 4.148: 94.4558% ( 102) 00:15:57.411 4.148 - 4.172: 94.9633% ( 67) 00:15:57.411 4.172 - 4.196: 95.3723% ( 54) 00:15:57.411 4.196 - 4.219: 95.7434% ( 49) 00:15:57.411 4.219 - 4.243: 96.0161% ( 36) 00:15:57.411 4.243 - 4.267: 96.2130% ( 26) 00:15:57.411 4.267 - 4.290: 96.4099% ( 26) 00:15:57.411 4.290 - 4.314: 96.5387% ( 17) 00:15:57.411 4.314 - 4.338: 96.6523% ( 15) 00:15:57.411 4.338 - 4.361: 96.7735% ( 16) 00:15:57.412 4.361 - 4.385: 96.8795% ( 14) 00:15:57.412 4.385 - 4.409: 96.9249% ( 6) 00:15:57.412 4.409 - 4.433: 96.9855% ( 8) 00:15:57.412 4.433 - 4.456: 97.0461% ( 8) 00:15:57.412 4.456 - 4.480: 97.0840% ( 5) 00:15:57.412 4.480 - 4.504: 97.1294% ( 6) 00:15:57.412 4.504 - 4.527: 97.1749% ( 6) 00:15:57.412 4.527 - 4.551: 97.1825% ( 1) 00:15:57.412 4.551 - 4.575: 97.2052% ( 3) 00:15:57.412 4.575 - 4.599: 97.2203% ( 2) 00:15:57.412 4.599 - 4.622: 97.2658% ( 6) 00:15:57.412 4.622 - 4.646: 97.2733% ( 1) 00:15:57.412 4.646 - 4.670: 97.2885% ( 2) 00:15:57.412 4.670 - 4.693: 97.2961% ( 1) 00:15:57.412 4.693 - 4.717: 97.3112% ( 2) 00:15:57.412 4.741 - 4.764: 97.3188% ( 1) 00:15:57.412 4.764 - 4.788: 97.3264% ( 1) 00:15:57.412 4.812 - 4.836: 97.3491% ( 3) 00:15:57.412 4.836 - 4.859: 97.3870% ( 5) 00:15:57.412 4.859 - 4.883: 97.4173% ( 4) 00:15:57.412 4.883 - 4.907: 97.4551% ( 5) 00:15:57.412 4.907 - 4.930: 97.4854% ( 4) 00:15:57.412 4.930 - 4.954: 97.5157% ( 4) 00:15:57.412 4.954 - 4.978: 97.5687% ( 7) 00:15:57.412 4.978 - 5.001: 97.6218% ( 7) 00:15:57.412 5.001 - 5.025: 97.6672% ( 6) 00:15:57.412 5.025 - 5.049: 97.7278% ( 8) 00:15:57.412 5.049 - 5.073: 97.7732% ( 6) 00:15:57.412 5.073 - 5.096: 97.8414% ( 9) 00:15:57.412 5.096 - 5.120: 97.8944% ( 7) 00:15:57.412 5.120 - 5.144: 97.9171% ( 3) 00:15:57.412 5.144 - 5.167: 97.9323% ( 2) 00:15:57.412 5.167 - 5.191: 97.9853% ( 7) 00:15:57.412 5.191 - 5.215: 98.0383% ( 7) 00:15:57.412 5.215 - 5.239: 98.0838% ( 6) 00:15:57.412 5.239 - 5.262: 98.1216% ( 5) 00:15:57.412 5.262 - 5.286: 98.1444% ( 3) 00:15:57.412 5.286 - 5.310: 98.1595% ( 2) 00:15:57.412 5.310 - 5.333: 98.1671% ( 1) 00:15:57.412 5.333 - 5.357: 98.1747% ( 1) 
00:15:57.412 5.357 - 5.381: 98.1822% ( 1) 00:15:57.412 5.404 - 5.428: 98.2050% ( 3) 00:15:57.412 5.476 - 5.499: 98.2277% ( 3) 00:15:57.412 5.570 - 5.594: 98.2352% ( 1) 00:15:57.412 5.594 - 5.618: 98.2504% ( 2) 00:15:57.412 5.713 - 5.736: 98.2580% ( 1) 00:15:57.412 6.044 - 6.068: 98.2655% ( 1) 00:15:57.412 6.116 - 6.163: 98.2731% ( 1) 00:15:57.412 6.353 - 6.400: 98.2807% ( 1) 00:15:57.412 6.400 - 6.447: 98.2883% ( 1) 00:15:57.412 6.590 - 6.637: 98.2958% ( 1) 00:15:57.412 6.874 - 6.921: 98.3034% ( 1) 00:15:57.412 6.921 - 6.969: 98.3110% ( 1) 00:15:57.412 7.111 - 7.159: 98.3186% ( 1) 00:15:57.412 7.253 - 7.301: 98.3261% ( 1) 00:15:57.412 7.301 - 7.348: 98.3337% ( 1) 00:15:57.412 7.348 - 7.396: 98.3413% ( 1) 00:15:57.412 7.396 - 7.443: 98.3489% ( 1) 00:15:57.412 7.443 - 7.490: 98.3640% ( 2) 00:15:57.412 7.490 - 7.538: 98.3716% ( 1) 00:15:57.412 7.585 - 7.633: 98.3792% ( 1) 00:15:57.412 7.633 - 7.680: 98.3943% ( 2) 00:15:57.412 7.870 - 7.917: 98.4019% ( 1) 00:15:57.412 8.107 - 8.154: 98.4246% ( 3) 00:15:57.412 8.154 - 8.201: 98.4625% ( 5) 00:15:57.412 8.201 - 8.249: 98.4700% ( 1) 00:15:57.412 8.249 - 8.296: 98.4776% ( 1) 00:15:57.412 8.296 - 8.344: 98.4928% ( 2) 00:15:57.412 8.391 - 8.439: 98.5003% ( 1) 00:15:57.412 8.439 - 8.486: 98.5079% ( 1) 00:15:57.412 8.486 - 8.533: 98.5155% ( 1) 00:15:57.412 8.533 - 8.581: 98.5231% ( 1) 00:15:57.412 8.581 - 8.628: 98.5306% ( 1) 00:15:57.412 8.628 - 8.676: 98.5382% ( 1) 00:15:57.412 8.723 - 8.770: 98.5609% ( 3) 00:15:57.412 8.770 - 8.818: 98.5685% ( 1) 00:15:57.412 8.865 - 8.913: 98.5761% ( 1) 00:15:57.412 8.913 - 8.960: 98.5912% ( 2) 00:15:57.412 9.055 - 9.102: 98.5988% ( 1) 00:15:57.412 9.102 - 9.150: 98.6140% ( 2) 00:15:57.412 9.197 - 9.244: 98.6215% ( 1) 00:15:57.412 9.529 - 9.576: 98.6367% ( 2) 00:15:57.412 9.671 - 9.719: 98.6442% ( 1) 00:15:57.412 9.766 - 9.813: 98.6518% ( 1) 00:15:57.412 9.813 - 9.861: 98.6594% ( 1) 00:15:57.412 9.956 - 10.003: 98.6670% ( 1) 00:15:57.412 10.003 - 10.050: 98.6821% ( 2) 00:15:57.412 10.050 - 10.098: 98.6973% ( 2) 00:15:57.412 10.098 - 10.145: 98.7124% ( 2) 00:15:57.412 10.145 - 10.193: 98.7200% ( 1) 00:15:57.412 10.193 - 10.240: 98.7276% ( 1) 00:15:57.412 10.240 - 10.287: 98.7351% ( 1) 00:15:57.412 10.287 - 10.335: 98.7427% ( 1) 00:15:57.412 10.382 - 10.430: 98.7503% ( 1) 00:15:57.412 10.619 - 10.667: 98.7579% ( 1) 00:15:57.412 10.714 - 10.761: 98.7654% ( 1) 00:15:57.412 10.761 - 10.809: 98.7730% ( 1) 00:15:57.412 10.904 - 10.951: 98.7806% ( 1) 00:15:57.412 11.188 - 11.236: 98.7957% ( 2) 00:15:57.412 11.330 - 11.378: 98.8033% ( 1) 00:15:57.412 11.615 - 11.662: 98.8109% ( 1) 00:15:57.412 11.710 - 11.757: 98.8260% ( 2) 00:15:57.412 11.804 - 11.852: 98.8336% ( 1) 00:15:57.412 12.136 - 12.231: 98.8487% ( 2) 00:15:57.412 12.231 - 12.326: 98.8715% ( 3) 00:15:57.412 12.326 - 12.421: 98.8790% ( 1) 00:15:57.412 12.421 - 12.516: 98.8866% ( 1) 00:15:57.412 12.516 - 12.610: 98.8942% ( 1) 00:15:57.412 12.610 - 12.705: 98.9093% ( 2) 00:15:57.412 12.800 - 12.895: 98.9169% ( 1) 00:15:57.412 12.895 - 12.990: 98.9321% ( 2) 00:15:57.412 12.990 - 13.084: 98.9472% ( 2) 00:15:57.412 13.084 - 13.179: 98.9548% ( 1) 00:15:57.412 13.179 - 13.274: 98.9624% ( 1) 00:15:57.412 13.369 - 13.464: 98.9699% ( 1) 00:15:57.412 13.559 - 13.653: 98.9775% ( 1) 00:15:57.412 13.748 - 13.843: 98.9851% ( 1) 00:15:57.412 13.843 - 13.938: 98.9927% ( 1) 00:15:57.412 13.938 - 14.033: 99.0002% ( 1) 00:15:57.412 14.033 - 14.127: 99.0078% ( 1) 00:15:57.412 14.317 - 14.412: 99.0154% ( 1) 00:15:57.412 14.507 - 14.601: 99.0229% ( 1) 00:15:57.412 14.601 - 14.696: 
99.0305% ( 1) 00:15:57.412 14.696 - 14.791: 99.0457% ( 2) 00:15:57.412 14.791 - 14.886: 99.0532% ( 1) 00:15:57.412 14.886 - 14.981: 99.0684% ( 2) 00:15:57.412 17.067 - 17.161: 99.0760% ( 1) 00:15:57.412 17.256 - 17.351: 99.0835% ( 1) 00:15:57.412 17.351 - 17.446: 99.0911% ( 1) 00:15:57.412 17.446 - 17.541: 99.1138% ( 3) 00:15:57.412 17.541 - 17.636: 99.1517% ( 5) 00:15:57.412 17.636 - 17.730: 99.2047% ( 7) 00:15:57.412 17.730 - 17.825: 99.2350% ( 4) 00:15:57.412 17.825 - 17.920: 99.2956% ( 8) 00:15:57.412 17.920 - 18.015: 99.3486% ( 7) 00:15:57.412 18.015 - 18.110: 99.3941% ( 6) 00:15:57.412 18.110 - 18.204: 99.4698% ( 10) 00:15:57.412 18.204 - 18.299: 99.5380% ( 9) 00:15:57.412 18.299 - 18.394: 99.5531% ( 2) 00:15:57.412 18.394 - 18.489: 99.5986% ( 6) 00:15:57.412 18.489 - 18.584: 99.6592% ( 8) 00:15:57.412 18.584 - 18.679: 99.7198% ( 8) 00:15:57.412 18.679 - 18.773: 99.7728% ( 7) 00:15:57.412 18.773 - 18.868: 99.7879% ( 2) 00:15:57.412 18.868 - 18.963: 99.8031% ( 2) 00:15:57.412 18.963 - 19.058: 99.8182% ( 2) 00:15:57.412 19.058 - 19.153: 99.8334% ( 2) 00:15:57.412 19.247 - 19.342: 99.8409% ( 1) 00:15:57.412 19.437 - 19.532: 99.8485% ( 1) 00:15:57.412 19.911 - 20.006: 99.8561% ( 1) 00:15:57.412 20.385 - 20.480: 99.8637% ( 1) 00:15:57.412 22.281 - 22.376: 99.8712% ( 1) 00:15:57.412 23.040 - 23.135: 99.8788% ( 1) 00:15:57.412 23.988 - 24.083: 99.8864% ( 1) 00:15:57.412 2014.625 - 2026.761: 99.8940% ( 1) 00:15:57.412 3980.705 - 4004.978: 99.9621% ( 9) 00:15:57.412 4004.978 - 4029.250: 100.0000% ( 5) 00:15:57.412 00:15:57.412 Complete histogram 00:15:57.412 ================== 00:15:57.412 Range in us Cumulative Count 00:15:57.412 2.050 - 2.062: 0.9543% ( 126) 00:15:57.412 2.062 - 2.074: 40.4075% ( 5209) 00:15:57.412 2.074 - 2.086: 50.4128% ( 1321) 00:15:57.412 2.086 - 2.098: 51.8367% ( 188) 00:15:57.412 2.098 - 2.110: 56.8734% ( 665) 00:15:57.412 2.110 - 2.121: 58.6382% ( 233) 00:15:57.412 2.121 - 2.133: 63.7128% ( 670) 00:15:57.412 2.133 - 2.145: 77.9141% ( 1875) 00:15:57.412 2.145 - 2.157: 80.2924% ( 314) 00:15:57.412 2.157 - 2.169: 81.6784% ( 183) 00:15:57.412 2.169 - 2.181: 83.9430% ( 299) 00:15:57.412 2.181 - 2.193: 84.8216% ( 116) 00:15:57.412 2.193 - 2.204: 86.0638% ( 164) 00:15:57.412 2.204 - 2.216: 89.8053% ( 494) 00:15:57.412 2.216 - 2.228: 91.0854% ( 169) 00:15:57.412 2.228 - 2.240: 92.8425% ( 232) 00:15:57.412 2.240 - 2.252: 94.1528% ( 173) 00:15:57.412 2.252 - 2.264: 94.5391% ( 51) 00:15:57.412 2.264 - 2.276: 94.7360% ( 26) 00:15:57.412 2.276 - 2.287: 94.8800% ( 19) 00:15:57.413 2.287 - 2.299: 95.1981% ( 42) 00:15:57.413 2.299 - 2.311: 95.6525% ( 60) 00:15:57.413 2.311 - 2.323: 95.8949% ( 32) 00:15:57.413 2.323 - 2.335: 95.9858% ( 12) 00:15:57.413 2.335 - 2.347: 96.0236% ( 5) 00:15:57.413 2.347 - 2.359: 96.0766% ( 7) 00:15:57.413 2.359 - 2.370: 96.1372% ( 8) 00:15:57.413 2.370 - 2.382: 96.3114% ( 23) 00:15:57.413 2.382 - 2.394: 96.6144% ( 40) 00:15:57.413 2.394 - 2.406: 96.8416% ( 30) 00:15:57.413 2.406 - 2.418: 96.9628% ( 16) 00:15:57.413 2.418 - 2.430: 97.1370% ( 23) 00:15:57.413 2.430 - 2.441: 97.3642% ( 30) 00:15:57.413 2.441 - 2.453: 97.5309% ( 22) 00:15:57.413 2.453 - 2.465: 97.6899% ( 21) 00:15:57.413 2.465 - 2.477: 97.8035% ( 15) 00:15:57.413 2.477 - 2.489: 97.9247% ( 16) 00:15:57.413 2.489 - 2.501: 98.0610% ( 18) 00:15:57.413 2.501 - 2.513: 98.1898% ( 17) 00:15:57.413 2.513 - 2.524: 98.2352% ( 6) 00:15:57.413 2.524 - 2.536: 98.3261% ( 12) 00:15:57.413 2.536 - 2.548: 98.3716% ( 6) 00:15:57.413 2.548 - 2.560: 98.4322% ( 8) 00:15:57.413 2.560 - 2.572: 98.4700% ( 5) 
00:15:57.413 2.572 - 2.584: 98.5306% ( 8) 00:15:57.413 2.584 - 2.596: 98.5761% ( 6) 00:15:57.413 2.596 - 2.607: 98.5912% ( 2) 00:15:57.413 2.619 - 2.631: 98.6140% ( 3) 00:15:57.413 2.643 - 2.655: 98.6215% ( 1) 00:15:57.413 2.655 - 2.667: 98.6367% ( 2) 00:15:57.413 2.690 - 2.702: 98.6442% ( 1) 00:15:57.413 2.702 - 2.714: 98.6670% ( 3) 00:15:57.413 2.714 - 2.726: 98.6745% ( 1) 00:15:57.413 2.726 - 2.738: 98.6821% ( 1) 00:15:57.413 2.738 - 2.750: 98.6897% ( 1) 00:15:57.413 2.785 - 2.797: 98.6973% ( 1) 00:15:57.413 2.797 - 2.809: 98.7048% ( 1) 00:15:57.413 [2024-07-15 03:19:03.474589] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:57.413 2.821 - 2.833: 98.7124% ( 1) 00:15:57.413 2.833 - 2.844: 98.7200% ( 1) 00:15:57.413 2.963 - 2.975: 98.7276% ( 1) 00:15:57.413 3.058 - 3.081: 98.7351% ( 1) 00:15:57.413 3.176 - 3.200: 98.7427% ( 1) 00:15:57.413 3.437 - 3.461: 98.7503% ( 1) 00:15:57.413 3.484 - 3.508: 98.7579% ( 1) 00:15:57.413 3.508 - 3.532: 98.7654% ( 1) 00:15:57.413 3.532 - 3.556: 98.7806% ( 2) 00:15:57.413 3.650 - 3.674: 98.7957% ( 2) 00:15:57.413 3.793 - 3.816: 98.8109% ( 2) 00:15:57.413 3.816 - 3.840: 98.8260% ( 2) 00:15:57.413 3.840 - 3.864: 98.8639% ( 5) 00:15:57.413 3.864 - 3.887: 98.8790% ( 2) 00:15:57.413 3.887 - 3.911: 98.8866% ( 1) 00:15:57.413 3.911 - 3.935: 98.8942% ( 1) 00:15:57.413 4.006 - 4.030: 98.9018% ( 1) 00:15:57.413 4.030 - 4.053: 98.9093% ( 1) 00:15:57.413 4.101 - 4.124: 98.9169% ( 1) 00:15:57.413 4.148 - 4.172: 98.9245% ( 1) 00:15:57.413 4.243 - 4.267: 98.9321% ( 1) 00:15:57.413 5.950 - 5.973: 98.9396% ( 1) 00:15:57.413 6.068 - 6.116: 98.9472% ( 1) 00:15:57.413 6.827 - 6.874: 98.9548% ( 1) 00:15:57.413 6.874 - 6.921: 98.9624% ( 1) 00:15:57.413 7.016 - 7.064: 98.9699% ( 1) 00:15:57.413 7.159 - 7.206: 98.9775% ( 1) 00:15:57.413 7.538 - 7.585: 98.9851% ( 1) 00:15:57.413 7.585 - 7.633: 98.9927% ( 1) 00:15:57.413 7.680 - 7.727: 99.0078% ( 2) 00:15:57.413 7.775 - 7.822: 99.0229% ( 2) 00:15:57.413 8.865 - 8.913: 99.0305% ( 1) 00:15:57.413 9.766 - 9.813: 99.0381% ( 1) 00:15:57.413 15.360 - 15.455: 99.0457% ( 1) 00:15:57.413 15.550 - 15.644: 99.0608% ( 2) 00:15:57.413 15.644 - 15.739: 99.0684% ( 1) 00:15:57.413 15.834 - 15.929: 99.0760% ( 1) 00:15:57.413 15.929 - 16.024: 99.0835% ( 1) 00:15:57.413 16.024 - 16.119: 99.1290% ( 6) 00:15:57.413 16.119 - 16.213: 99.1744% ( 6) 00:15:57.413 16.213 - 16.308: 99.2047% ( 4) 00:15:57.413 16.308 - 16.403: 99.2199% ( 2) 00:15:57.413 16.403 - 16.498: 99.2502% ( 4) 00:15:57.413 16.498 - 16.593: 99.2577% ( 1) 00:15:57.413 16.593 - 16.687: 99.2805% ( 3) 00:15:57.413 16.687 - 16.782: 99.3032% ( 3) 00:15:57.413 16.782 - 16.877: 99.3108% ( 1) 00:15:57.413 16.877 - 16.972: 99.3183% ( 1) 00:15:57.413 17.067 - 17.161: 99.3486% ( 4) 00:15:57.413 17.256 - 17.351: 99.3562% ( 1) 00:15:57.413 17.446 - 17.541: 99.3714% ( 2) 00:15:57.413 17.541 - 17.636: 99.3789% ( 1) 00:15:57.413 18.110 - 18.204: 99.3941% ( 2) 00:15:57.413 18.204 - 18.299: 99.4017% ( 1) 00:15:57.413 18.299 - 18.394: 99.4168% ( 2) 00:15:57.413 19.153 - 19.247: 99.4244% ( 1) 00:15:57.413 2014.625 - 2026.761: 99.4319% ( 1) 00:15:57.413 2026.761 - 2038.898: 99.4471% ( 2) 00:15:57.413 3980.705 - 4004.978: 99.8409% ( 52) 00:15:57.413 4004.978 - 4029.250: 99.9773% ( 18) 00:15:57.413 5995.330 - 6019.603: 100.0000% ( 3) 00:15:57.413 00:15:57.413 03:19:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:57.413 03:19:03 nvmf_tcp.nvmf_vfio_user
-- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:57.413 03:19:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:57.413 03:19:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:57.413 03:19:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:57.670 [ 00:15:57.670 { 00:15:57.670 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:57.670 "subtype": "Discovery", 00:15:57.670 "listen_addresses": [], 00:15:57.670 "allow_any_host": true, 00:15:57.670 "hosts": [] 00:15:57.670 }, 00:15:57.670 { 00:15:57.670 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:57.670 "subtype": "NVMe", 00:15:57.670 "listen_addresses": [ 00:15:57.670 { 00:15:57.670 "trtype": "VFIOUSER", 00:15:57.670 "adrfam": "IPv4", 00:15:57.670 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:57.670 "trsvcid": "0" 00:15:57.670 } 00:15:57.670 ], 00:15:57.670 "allow_any_host": true, 00:15:57.670 "hosts": [], 00:15:57.670 "serial_number": "SPDK1", 00:15:57.670 "model_number": "SPDK bdev Controller", 00:15:57.670 "max_namespaces": 32, 00:15:57.670 "min_cntlid": 1, 00:15:57.670 "max_cntlid": 65519, 00:15:57.670 "namespaces": [ 00:15:57.670 { 00:15:57.670 "nsid": 1, 00:15:57.670 "bdev_name": "Malloc1", 00:15:57.670 "name": "Malloc1", 00:15:57.670 "nguid": "24EF26F947254EAF95305F73C0239119", 00:15:57.670 "uuid": "24ef26f9-4725-4eaf-9530-5f73c0239119" 00:15:57.670 }, 00:15:57.670 { 00:15:57.670 "nsid": 2, 00:15:57.670 "bdev_name": "Malloc3", 00:15:57.670 "name": "Malloc3", 00:15:57.670 "nguid": "D4DE2962E02649C9B60DBAFD9A3C3DEA", 00:15:57.670 "uuid": "d4de2962-e026-49c9-b60d-bafd9a3c3dea" 00:15:57.670 } 00:15:57.670 ] 00:15:57.670 }, 00:15:57.670 { 00:15:57.670 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:57.670 "subtype": "NVMe", 00:15:57.670 "listen_addresses": [ 00:15:57.670 { 00:15:57.670 "trtype": "VFIOUSER", 00:15:57.670 "adrfam": "IPv4", 00:15:57.670 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:57.670 "trsvcid": "0" 00:15:57.670 } 00:15:57.670 ], 00:15:57.670 "allow_any_host": true, 00:15:57.670 "hosts": [], 00:15:57.670 "serial_number": "SPDK2", 00:15:57.670 "model_number": "SPDK bdev Controller", 00:15:57.670 "max_namespaces": 32, 00:15:57.670 "min_cntlid": 1, 00:15:57.670 "max_cntlid": 65519, 00:15:57.670 "namespaces": [ 00:15:57.670 { 00:15:57.670 "nsid": 1, 00:15:57.670 "bdev_name": "Malloc2", 00:15:57.670 "name": "Malloc2", 00:15:57.670 "nguid": "94BFD051513B4EB5A63FA198B52FAF09", 00:15:57.670 "uuid": "94bfd051-513b-4eb5-a63f-a198b52faf09" 00:15:57.670 } 00:15:57.670 ] 00:15:57.670 } 00:15:57.670 ] 00:15:57.670 03:19:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:57.670 03:19:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3159442 00:15:57.670 03:19:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:57.670 03:19:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:57.670 03:19:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:57.670 03:19:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- 
# '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.670 03:19:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.670 03:19:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:57.670 03:19:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:57.670 03:19:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:57.670 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.927 [2024-07-15 03:19:03.915399] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.927 Malloc4 00:15:57.927 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:58.183 [2024-07-15 03:19:04.278101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:58.183 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:58.183 Asynchronous Event Request test 00:15:58.183 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.183 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:58.183 Registering asynchronous event callbacks... 00:15:58.183 Starting namespace attribute notice tests for all controllers... 00:15:58.183 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:58.183 aer_cb - Changed Namespace 00:15:58.183 Cleaning up... 
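The sequence above is the namespace-change AER check: a new Malloc4 bdev is created over RPC, attached to nqn.2019-07.io.spdk:cnode2 as nsid 2, and the registered callback reports the resulting Namespace Attribute Changed notice (log page 4, aen_event_type 0x02). A minimal sketch of the same flow, using only the rpc.py calls that appear in this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # back the new namespace with a 64 MB, 512-byte-block malloc bdev
    $RPC bdev_malloc_create 64 512 --name Malloc4
    # attaching it as nsid 2 is what fires the namespace-attribute AEN
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    # confirm the subsystem listing now shows Malloc4 under cnode2
    $RPC nvmf_get_subsystems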
00:15:58.441 [ 00:15:58.441 { 00:15:58.441 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:58.441 "subtype": "Discovery", 00:15:58.441 "listen_addresses": [], 00:15:58.441 "allow_any_host": true, 00:15:58.441 "hosts": [] 00:15:58.441 }, 00:15:58.441 { 00:15:58.441 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:58.441 "subtype": "NVMe", 00:15:58.441 "listen_addresses": [ 00:15:58.441 { 00:15:58.441 "trtype": "VFIOUSER", 00:15:58.441 "adrfam": "IPv4", 00:15:58.441 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:58.441 "trsvcid": "0" 00:15:58.441 } 00:15:58.441 ], 00:15:58.441 "allow_any_host": true, 00:15:58.441 "hosts": [], 00:15:58.441 "serial_number": "SPDK1", 00:15:58.441 "model_number": "SPDK bdev Controller", 00:15:58.441 "max_namespaces": 32, 00:15:58.441 "min_cntlid": 1, 00:15:58.441 "max_cntlid": 65519, 00:15:58.441 "namespaces": [ 00:15:58.441 { 00:15:58.441 "nsid": 1, 00:15:58.441 "bdev_name": "Malloc1", 00:15:58.441 "name": "Malloc1", 00:15:58.441 "nguid": "24EF26F947254EAF95305F73C0239119", 00:15:58.441 "uuid": "24ef26f9-4725-4eaf-9530-5f73c0239119" 00:15:58.441 }, 00:15:58.441 { 00:15:58.441 "nsid": 2, 00:15:58.441 "bdev_name": "Malloc3", 00:15:58.441 "name": "Malloc3", 00:15:58.441 "nguid": "D4DE2962E02649C9B60DBAFD9A3C3DEA", 00:15:58.441 "uuid": "d4de2962-e026-49c9-b60d-bafd9a3c3dea" 00:15:58.441 } 00:15:58.441 ] 00:15:58.441 }, 00:15:58.441 { 00:15:58.441 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:58.441 "subtype": "NVMe", 00:15:58.441 "listen_addresses": [ 00:15:58.441 { 00:15:58.441 "trtype": "VFIOUSER", 00:15:58.441 "adrfam": "IPv4", 00:15:58.441 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:58.441 "trsvcid": "0" 00:15:58.441 } 00:15:58.441 ], 00:15:58.441 "allow_any_host": true, 00:15:58.441 "hosts": [], 00:15:58.441 "serial_number": "SPDK2", 00:15:58.441 "model_number": "SPDK bdev Controller", 00:15:58.441 "max_namespaces": 32, 00:15:58.441 "min_cntlid": 1, 00:15:58.441 "max_cntlid": 65519, 00:15:58.441 "namespaces": [ 00:15:58.441 { 00:15:58.441 "nsid": 1, 00:15:58.441 "bdev_name": "Malloc2", 00:15:58.441 "name": "Malloc2", 00:15:58.441 "nguid": "94BFD051513B4EB5A63FA198B52FAF09", 00:15:58.441 "uuid": "94bfd051-513b-4eb5-a63f-a198b52faf09" 00:15:58.441 }, 00:15:58.441 { 00:15:58.441 "nsid": 2, 00:15:58.441 "bdev_name": "Malloc4", 00:15:58.441 "name": "Malloc4", 00:15:58.441 "nguid": "58E2B57333D54EB5A3D118293D1987F0", 00:15:58.441 "uuid": "58e2b573-33d5-4eb5-a3d1-18293d1987f0" 00:15:58.441 } 00:15:58.441 ] 00:15:58.441 } 00:15:58.441 ] 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3159442 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3153832 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3153832 ']' 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3153832 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3153832 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3153832' 00:15:58.441 killing process with pid 3153832 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3153832 00:15:58.441 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3153832 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3159599 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3159599' 00:15:59.007 Process pid: 3159599 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3159599 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3159599 ']' 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:59.007 03:19:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:59.007 [2024-07-15 03:19:04.908485] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:59.007 [2024-07-15 03:19:04.909475] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:59.007 [2024-07-15 03:19:04.909531] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.007 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.007 [2024-07-15 03:19:04.975195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:59.007 [2024-07-15 03:19:05.070586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.007 [2024-07-15 03:19:05.070643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:59.007 [2024-07-15 03:19:05.070659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.007 [2024-07-15 03:19:05.070672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.007 [2024-07-15 03:19:05.070684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.007 [2024-07-15 03:19:05.070746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.007 [2024-07-15 03:19:05.070834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.007 [2024-07-15 03:19:05.070904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.007 [2024-07-15 03:19:05.070907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.265 [2024-07-15 03:19:05.177550] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:59.265 [2024-07-15 03:19:05.177777] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:59.265 [2024-07-15 03:19:05.178077] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:59.265 [2024-07-15 03:19:05.178693] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:59.265 [2024-07-15 03:19:05.178934] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:59.265 03:19:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.265 03:19:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:59.265 03:19:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:00.199 03:19:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:00.458 03:19:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:00.458 03:19:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:00.458 03:19:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:00.458 03:19:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:00.458 03:19:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:00.717 Malloc1 00:16:00.717 03:19:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:00.976 03:19:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:01.234 03:19:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:01.492 03:19:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:16:01.492 03:19:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:01.492 03:19:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:01.750 Malloc2 00:16:01.750 03:19:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:02.007 03:19:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:02.264 03:19:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3159599 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3159599 ']' 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3159599 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3159599 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3159599' 00:16:02.556 killing process with pid 3159599 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3159599 00:16:02.556 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3159599 00:16:02.814 03:19:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:02.815 00:16:02.815 real 0m52.813s 00:16:02.815 user 3m28.675s 00:16:02.815 sys 0m4.333s 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:02.815 ************************************ 00:16:02.815 END TEST nvmf_vfio_user 00:16:02.815 ************************************ 00:16:02.815 03:19:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:02.815 03:19:08 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:02.815 03:19:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:02.815 03:19:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.815 03:19:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:02.815 ************************************ 00:16:02.815 START 
TEST nvmf_vfio_user_nvme_compliance 00:16:02.815 ************************************ 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:02.815 * Looking for test storage... 00:16:02.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3160192 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3160192' 00:16:02.815 Process pid: 3160192 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3160192 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3160192 ']' 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.815 03:19:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:02.815 [2024-07-15 03:19:08.945479] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:02.815 [2024-07-15 03:19:08.945552] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.073 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.073 [2024-07-15 03:19:09.008042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:03.073 [2024-07-15 03:19:09.094535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.073 [2024-07-15 03:19:09.094591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.073 [2024-07-15 03:19:09.094619] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.073 [2024-07-15 03:19:09.094630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.073 [2024-07-15 03:19:09.094640] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
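The app_setup_trace notices above give two ways to inspect the 0xFFFF tracepoint mask the target was started with; both are sketched here exactly as the notices suggest (app name nvmf, shm id 0 for this run):

    # snapshot trace events from the running target at runtime
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0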
00:16:03.073 [2024-07-15 03:19:09.095903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.073 [2024-07-15 03:19:09.095928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.073 [2024-07-15 03:19:09.095931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.073 03:19:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.073 03:19:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:16:03.073 03:19:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:04.448 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:04.448 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:04.448 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:04.448 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.449 malloc0 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:04.449 03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.449 
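Before the compliance binary is launched, the rpc_cmd calls above assemble the vfio-user target it attaches to. Collected in one place, the same sequence reads as below (a sketch; rpc.py stands in for the test's rpc_cmd wrapper):

    # VFIOUSER transport plus the directory that holds the vfio-user socket
    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    # 64 MB malloc bdev exposed as the subsystem's only namespace
    rpc.py bdev_malloc_create 64 512 -b malloc0
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0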
03:19:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:04.449 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.449 00:16:04.449 00:16:04.449 CUnit - A unit testing framework for C - Version 2.1-3 00:16:04.449 http://cunit.sourceforge.net/ 00:16:04.449 00:16:04.449 00:16:04.449 Suite: nvme_compliance 00:16:04.449 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 03:19:10.432243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.449 [2024-07-15 03:19:10.433710] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:04.449 [2024-07-15 03:19:10.433734] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:04.449 [2024-07-15 03:19:10.433761] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:04.449 [2024-07-15 03:19:10.437289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.449 passed 00:16:04.449 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 03:19:10.521880] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.449 [2024-07-15 03:19:10.524906] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.449 passed 00:16:04.707 Test: admin_identify_ns ...[2024-07-15 03:19:10.613772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.707 [2024-07-15 03:19:10.670898] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:04.707 [2024-07-15 03:19:10.678894] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:04.707 [2024-07-15 03:19:10.700004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.707 passed 00:16:04.707 Test: admin_get_features_mandatory_features ...[2024-07-15 03:19:10.785236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.707 [2024-07-15 03:19:10.788253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.707 passed 00:16:04.964 Test: admin_get_features_optional_features ...[2024-07-15 03:19:10.871802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.964 [2024-07-15 03:19:10.877837] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.964 passed 00:16:04.964 Test: admin_set_features_number_of_queues ...[2024-07-15 03:19:10.959068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.964 [2024-07-15 03:19:11.065010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.964 passed 00:16:05.221 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 03:19:11.149800] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.221 [2024-07-15 03:19:11.152826] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.221 passed 00:16:05.221 Test: admin_get_log_page_with_lpo ...[2024-07-15 03:19:11.236018] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.221 [2024-07-15 03:19:11.304889] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:05.221 [2024-07-15 03:19:11.317985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.221 passed 00:16:05.479 Test: fabric_property_get ...[2024-07-15 03:19:11.401640] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.479 [2024-07-15 03:19:11.402931] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:05.479 [2024-07-15 03:19:11.404665] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.479 passed 00:16:05.479 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 03:19:11.489219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.479 [2024-07-15 03:19:11.490480] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:05.479 [2024-07-15 03:19:11.492239] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.479 passed 00:16:05.479 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 03:19:11.574463] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.736 [2024-07-15 03:19:11.657885] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:05.736 [2024-07-15 03:19:11.673888] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:05.736 [2024-07-15 03:19:11.679012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.736 passed 00:16:05.736 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 03:19:11.762132] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.736 [2024-07-15 03:19:11.763438] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:05.736 [2024-07-15 03:19:11.765174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.736 passed 00:16:05.736 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 03:19:11.848411] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.993 [2024-07-15 03:19:11.923905] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:05.993 [2024-07-15 03:19:11.947888] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:05.993 [2024-07-15 03:19:11.953004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.993 passed 00:16:05.993 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 03:19:12.039200] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.993 [2024-07-15 03:19:12.040495] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:05.993 [2024-07-15 03:19:12.040546] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:05.993 [2024-07-15 03:19:12.042235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.993 passed 00:16:05.993 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 03:19:12.123617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.251 [2024-07-15 03:19:12.215888] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:06.251 [2024-07-15 03:19:12.223892] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:06.251 [2024-07-15 03:19:12.231902] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:06.251 [2024-07-15 03:19:12.239902] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:06.251 [2024-07-15 03:19:12.269000] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.251 passed 00:16:06.251 Test: admin_create_io_sq_verify_pc ...[2024-07-15 03:19:12.352616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.251 [2024-07-15 03:19:12.368904] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:06.251 [2024-07-15 03:19:12.386915] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.508 passed 00:16:06.508 Test: admin_create_io_qp_max_qps ...[2024-07-15 03:19:12.470474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.440 [2024-07-15 03:19:13.560893] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:08.003 [2024-07-15 03:19:13.941301] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.003 passed 00:16:08.003 Test: admin_create_io_sq_shared_cq ...[2024-07-15 03:19:14.025630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.261 [2024-07-15 03:19:14.155893] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:08.261 [2024-07-15 03:19:14.192960] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.261 passed 00:16:08.261 00:16:08.261 Run Summary: Type Total Ran Passed Failed Inactive 00:16:08.261 suites 1 1 n/a 0 0 00:16:08.261 tests 18 18 18 0 0 00:16:08.261 asserts 360 360 360 0 n/a 00:16:08.261 00:16:08.261 Elapsed time = 1.555 seconds 00:16:08.261 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3160192 00:16:08.261 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3160192 ']' 00:16:08.261 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3160192 00:16:08.261 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:08.261 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:08.261 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3160192 00:16:08.261 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:08.261 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:08.261 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3160192' 00:16:08.261 killing process with pid 3160192 00:16:08.261 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3160192 00:16:08.261 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3160192 00:16:08.519 03:19:14 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:08.520 00:16:08.520 real 0m5.701s 00:16:08.520 user 0m16.016s 00:16:08.520 sys 0m0.532s 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:08.520 ************************************ 00:16:08.520 END TEST nvmf_vfio_user_nvme_compliance 00:16:08.520 ************************************ 00:16:08.520 03:19:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:08.520 03:19:14 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:08.520 03:19:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:08.520 03:19:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.520 03:19:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.520 ************************************ 00:16:08.520 START TEST nvmf_vfio_user_fuzz 00:16:08.520 ************************************ 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:08.520 * Looking for test storage... 00:16:08.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.520 03:19:14 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3160915 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3160915' 00:16:08.520 Process pid: 3160915 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3160915 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3160915 ']' 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
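waitforlisten above blocks until the freshly started nvmf_tgt (the fuzz target, pid 3160915) answers on /var/tmp/spdk.sock. A rough equivalent of that helper, as a sketch rather than the autotest implementation:

    # poll the RPC socket until the target responds or ~50 s elapse
    for i in $(seq 1 100); do
        rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done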
00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.520 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:09.085 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.085 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:09.085 03:19:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:10.015 03:19:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:10.015 03:19:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.016 03:19:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.016 03:19:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.016 03:19:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:10.016 03:19:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:10.016 03:19:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.016 03:19:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.016 malloc0 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:10.016 03:19:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:42.075 Fuzzing completed. 
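
Condensed from the xtrace above: the vfio-user target is assembled with a handful of RPCs, then nvme_fuzz is pointed at the socket directory for a fixed-seed, 30-second run. The same sequence as standalone commands (rpc.py stands in for the rpc_cmd wrapper; binary paths shortened):

    rpc.py nvmf_create_transport -t VFIOUSER
    rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MiB backing bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    # -S fixes the random seed so a crashing command sequence reproduces
    nvme_fuzz -m 0x2 -t 30 -S 123456 -N -a \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
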
Shutting down the fuzz application 00:16:42.075 00:16:42.075 Dumping successful admin opcodes: 00:16:42.075 8, 9, 10, 24, 00:16:42.075 Dumping successful io opcodes: 00:16:42.075 0, 00:16:42.075 NS: 0x200003a1ef00 I/O qp, Total commands completed: 623168, total successful commands: 2412, random_seed: 4095210048 00:16:42.075 NS: 0x200003a1ef00 admin qp, Total commands completed: 95240, total successful commands: 772, random_seed: 1457349376 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3160915 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3160915 ']' 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3160915 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3160915 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3160915' 00:16:42.075 killing process with pid 3160915 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3160915 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3160915 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:42.075 00:16:42.075 real 0m32.200s 00:16:42.075 user 0m31.258s 00:16:42.075 sys 0m30.426s 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.075 03:19:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:42.075 ************************************ 00:16:42.075 END TEST nvmf_vfio_user_fuzz 00:16:42.075 ************************************ 00:16:42.075 03:19:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:42.075 03:19:46 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:42.075 03:19:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:42.075 03:19:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.075 03:19:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:42.075 ************************************ 00:16:42.075 
START TEST nvmf_host_management 00:16:42.075
************************************ 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:42.075
* Looking for test storage... 00:16:42.075
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- paths/export.sh@2-6 -- # [same PATH exports and echo as in the nvmf_vfio_user_fuzz block above; duplicate output elided] 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.075
03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.075 03:19:46
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:42.075 03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.075 03:19:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.075 03:19:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.075 03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:42.075 03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:42.075 03:19:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:42.075 03:19:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:43.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:43.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:43.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:43.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:43.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:16:43.010 00:16:43.010 --- 10.0.0.2 ping statistics --- 00:16:43.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.010 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:16:43.010 00:16:43.010 --- 10.0.0.1 ping statistics --- 00:16:43.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.010 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:43.010 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.011 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.011 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:43.011 03:19:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3166319 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3166319 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3166319 ']' 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:43.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.011 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.011 [2024-07-15 03:19:49.075582] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:43.011 [2024-07-15 03:19:49.075677] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.011 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.011 [2024-07-15 03:19:49.144867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.269 [2024-07-15 03:19:49.237015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.269 [2024-07-15 03:19:49.237078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.269 [2024-07-15 03:19:49.237104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.269 [2024-07-15 03:19:49.237118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.269 [2024-07-15 03:19:49.237129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.269 [2024-07-15 03:19:49.237234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.269 [2024-07-15 03:19:49.237328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.269 [2024-07-15 03:19:49.237394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:43.269 [2024-07-15 03:19:49.237397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.269 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.270 [2024-07-15 03:19:49.378468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.270 03:19:49 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.270 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.527 Malloc0 00:16:43.528 [2024-07-15 03:19:49.437771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3166403 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3166403 /var/tmp/bdevperf.sock 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3166403 ']' 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:43.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
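
bdevperf receives its configuration on /dev/fd/63 via process substitution: gen_nvmf_target_json (echoed just below) emits a bdev_nvme_attach_controller fragment that is wrapped in SPDK's usual subsystems/bdev JSON-config envelope. A sketch of an equivalent standalone invocation, assuming that standard envelope (the /tmp path is illustrative):

    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    # -q 64 queue depth, -o 65536-byte I/Os, verify workload, 10 s run
    bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
        -q 64 -o 65536 -w verify -t 10
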
00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:43.528 { 00:16:43.528 "params": { 00:16:43.528 "name": "Nvme$subsystem", 00:16:43.528 "trtype": "$TEST_TRANSPORT", 00:16:43.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:43.528 "adrfam": "ipv4", 00:16:43.528 "trsvcid": "$NVMF_PORT", 00:16:43.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:43.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:43.528 "hdgst": ${hdgst:-false}, 00:16:43.528 "ddgst": ${ddgst:-false} 00:16:43.528 }, 00:16:43.528 "method": "bdev_nvme_attach_controller" 00:16:43.528 } 00:16:43.528 EOF 00:16:43.528 )") 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:43.528 03:19:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:43.528 "params": { 00:16:43.528 "name": "Nvme0", 00:16:43.528 "trtype": "tcp", 00:16:43.528 "traddr": "10.0.0.2", 00:16:43.528 "adrfam": "ipv4", 00:16:43.528 "trsvcid": "4420", 00:16:43.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:43.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:43.528 "hdgst": false, 00:16:43.528 "ddgst": false 00:16:43.528 }, 00:16:43.528 "method": "bdev_nvme_attach_controller" 00:16:43.528 }' 00:16:43.528 [2024-07-15 03:19:49.510262] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:43.528 [2024-07-15 03:19:49.510354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3166403 ] 00:16:43.528 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.528 [2024-07-15 03:19:49.572698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.528 [2024-07-15 03:19:49.659137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.094 Running I/O for 10 seconds... 
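
Before pulling the host's ACL entry, the harness waits until I/O is demonstrably flowing: waitforio (traced next) polls bdevperf's RPC socket until the attached bdev has completed at least 100 reads. The loop reduces to the following, with Nvme0n1 being the bdev name bdevperf derives from controller Nvme0:

    # Up to 10 polls, 0.25 s apart, for >=100 completed reads on Nvme0n1.
    for (( i = 10; i != 0; i-- )); do
        reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 0.25
    done
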
00:16:44.094 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.094 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:44.094 03:19:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:44.094 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.094 03:19:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:44.094 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:44.353 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:44.353 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:44.353 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:44.353 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:44.353 03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.353 03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.353 03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.353 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=526 00:16:44.353 03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 526 -ge 100 ']' 00:16:44.353 03:19:50 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:44.353
03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:44.353
03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:44.353
03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:44.353
03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.353
03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.353
[2024-07-15 03:19:50.348836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d2dd0 is same with the state(5) to be set 00:16:44.353
[message repeated 11 more times, 03:19:50.348979 through 03:19:50.349118] 00:16:44.353
03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.353
03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:44.353
03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.353
03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.353
03:19:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.354
03:19:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:44.354
[2024-07-15 03:19:50.361968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.353
[2024-07-15 03:19:50.362010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.353
[same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3, 03:19:50.362028 through 03:19:50.362103] 00:16:44.353
[2024-07-15 03:19:50.362118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17eced0 is same with the state(5) to be set 00:16:44.354
[2024-07-15 03:19:50.362222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.354
[2024-07-15 03:19:50.362246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.354
[same WRITE / ABORTED - SQ DELETION pair repeated for cid:1 through cid:56 (lba 82048 through 89088), 03:19:50.362273 through 03:19:50.364236] 00:16:44.355 [2024-07-15
03:19:50.364252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.355 [2024-07-15 03:19:50.364268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.355 [2024-07-15 03:19:50.364284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.355 [2024-07-15 03:19:50.364301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.355 [2024-07-15 03:19:50.364316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.355 [2024-07-15 03:19:50.364333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.355 [2024-07-15 03:19:50.364348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.355 [2024-07-15 03:19:50.364365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.355 [2024-07-15 03:19:50.364380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.355 [2024-07-15 03:19:50.364397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.355 [2024-07-15 03:19:50.364413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.355 [2024-07-15 03:19:50.364430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.355 [2024-07-15 03:19:50.364445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.355 [2024-07-15 03:19:50.364462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.355 [2024-07-15 03:19:50.364478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.355 [2024-07-15 03:19:50.364569] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bfe100 was disconnected and freed. reset controller. 
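The run of WRITE/ABORTED notices above is uniform enough to tally mechanically. A minimal helper sketch, assuming the console output was saved to build.log (the file name is illustrative, not something the harness produces):

    # count the aborted writes from a saved copy of this console log
    awk '/nvme_io_qpair_print_command: \*NOTICE\*: WRITE/              { writes++ }
         /spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION/ { aborts++ }
         END { printf "WRITE commands: %d, ABORTED completions: %d\n", writes, aborts }' build.log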
00:16:44.355 [2024-07-15 03:19:50.365700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:44.355 task offset: 81920 on job bdev=Nvme0n1 fails
00:16:44.355
00:16:44.355                                                      Latency(us)
00:16:44.355 Device Information            : runtime(s)     IOPS      MiB/s    Fail/s     TO/s     Average       min       max
00:16:44.355 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:44.355 Job: Nvme0n1 ended in about 0.41 seconds with error
00:16:44.355 Verification LBA range: start 0x0 length 0x400
00:16:44.355 Nvme0n1                       :       0.41  1553.17      97.07    155.32     0.00    36406.33   3070.48  33787.45
00:16:44.355 ===================================================================================================================
00:16:44.355 Total                         :             1553.17      97.07    155.32     0.00    36406.33   3070.48  33787.45
00:16:44.355 [2024-07-15 03:19:50.367578] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:44.355 [2024-07-15 03:19:50.367608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17eced0 (9): Bad file descriptor
00:16:44.355 [2024-07-15 03:19:50.373822] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3166403
00:16:45.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3166403) - No such process
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:45.286 {
00:16:45.286   "params": {
00:16:45.286     "name": "Nvme$subsystem",
00:16:45.286     "trtype": "$TEST_TRANSPORT",
00:16:45.286     "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:45.286     "adrfam": "ipv4",
00:16:45.286     "trsvcid": "$NVMF_PORT",
00:16:45.286     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:45.286     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:45.286     "hdgst": ${hdgst:-false},
00:16:45.286     "ddgst": ${ddgst:-false}
00:16:45.286   },
00:16:45.286   "method": "bdev_nvme_attach_controller"
00:16:45.286 }
00:16:45.286 EOF
00:16:45.286 )")
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:16:45.286 03:19:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:16:45.286   "params": {
00:16:45.286     "name": "Nvme0",
00:16:45.286     "trtype": "tcp",
00:16:45.286     "traddr": "10.0.0.2",
00:16:45.286     "adrfam": "ipv4",
00:16:45.286     "trsvcid": "4420",
00:16:45.286     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:16:45.286     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:45.286     "hdgst": false,
00:16:45.286     "ddgst": false
00:16:45.286   },
00:16:45.286   "method": "bdev_nvme_attach_controller"
00:16:45.286 }'
00:16:45.286 [2024-07-15 03:19:51.409731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:16:45.286 [2024-07-15 03:19:51.409802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3166673 ]
00:16:45.544 EAL: No free 2048 kB hugepages reported on node 1
00:16:45.544 [2024-07-15 03:19:51.469724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:45.544 [2024-07-15 03:19:51.558052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:45.801 Running I/O for 1 seconds...
00:16:46.735
00:16:46.735                                                      Latency(us)
00:16:46.735 Device Information            : runtime(s)     IOPS      MiB/s    Fail/s     TO/s     Average       min       max
00:16:46.735 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:46.735 Verification LBA range: start 0x0 length 0x400
00:16:46.735 Nvme0n1                       :       1.04  1606.45     100.40      0.00     0.00    39201.48   8349.77  33399.09
00:16:46.735 ===================================================================================================================
00:16:46.735 Total                         :             1606.45     100.40      0.00     0.00    39201.48   8349.77  33399.09
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:46.993 rmmod nvme_tcp
00:16:46.993 rmmod nvme_fabrics
00:16:46.993 rmmod nvme_keyring
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
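The JSON printed above is exactly what bdevperf consumed through /dev/fd/62. A minimal sketch of reproducing that run by hand, assuming the target from this test were still listening on 10.0.0.2:4420 (the /tmp path is illustrative only, not part of the harness):

    # write the bdev_nvme_attach_controller config to a scratch file and rerun
    cfg=/tmp/bdevperf_nvme0.json
    gen_nvmf_target_json 0 > "$cfg"        # helper sourced from test/nvmf/common.sh
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json "$cfg" -q 64 -o 65536 -w verify -t 1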
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3166319 ']'
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3166319
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3166319 ']'
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3166319
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:46.993 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3166319
00:16:47.259 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:16:47.259 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:16:47.259 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3166319'
00:16:47.259 killing process with pid 3166319
00:16:47.259 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3166319
00:16:47.259 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3166319
00:16:47.259 [2024-07-15 03:19:53.373993] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:16:47.521 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:47.521 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:47.521 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:47.521 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:47.521 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:47.521 03:19:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:47.521 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:47.521 03:19:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:49.421 03:19:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:49.421 03:19:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:16:49.421
00:16:49.421 real    0m8.631s
00:16:49.421 user    0m19.226s
00:16:49.421 sys     0m2.695s
00:16:49.421 03:19:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:49.421 03:19:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:49.421 ************************************
00:16:49.421 END TEST nvmf_host_management
00:16:49.421 ************************************
00:16:49.421 03:19:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:16:49.421 03:19:55 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:16:49.421 03:19:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:49.421 03:19:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:49.421 03:19:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:49.421 ************************************
00:16:49.421 START TEST nvmf_lvol
************************************
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:16:49.421 * Looking for test storage...
00:16:49.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:49.421 03:19:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... the same toolchain directories repeated ...]:/var/lib/snapd/snap/bin
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same toolchain directories repeated ...]:/var/lib/snapd/snap/bin
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same toolchain directories repeated ...]:/var/lib/snapd/snap/bin
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs
00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- #
local -g is_hw=no 00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.422 03:19:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:51.324 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.325 03:19:57 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.325 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:51.583 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:51.583 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.583 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:51.584 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:51.584 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:51.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:16:51.584 00:16:51.584 --- 10.0.0.2 ping statistics --- 00:16:51.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.584 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:51.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:16:51.584 00:16:51.584 --- 10.0.0.1 ping statistics --- 00:16:51.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.584 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3168755 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3168755 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3168755 ']' 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.584 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:51.584 [2024-07-15 03:19:57.677057] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:51.584 [2024-07-15 03:19:57.677136] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.584 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.875 [2024-07-15 03:19:57.746695] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:51.875 [2024-07-15 03:19:57.836531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.875 [2024-07-15 03:19:57.836597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:51.875 [2024-07-15 03:19:57.836614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.875 [2024-07-15 03:19:57.836627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.875 [2024-07-15 03:19:57.836639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.875 [2024-07-15 03:19:57.836721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.875 [2024-07-15 03:19:57.836790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.875 [2024-07-15 03:19:57.836788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.875 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.875 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:51.875 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.875 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.875 03:19:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:51.875 03:19:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.875 03:19:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:52.155 [2024-07-15 03:19:58.211076] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.156 03:19:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:52.413 03:19:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:52.413 03:19:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:52.672 03:19:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:52.672 03:19:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:52.930 03:19:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:53.190 03:19:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e9c3bec7-2359-4f03-9133-ea66985de5a2 00:16:53.190 03:19:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e9c3bec7-2359-4f03-9133-ea66985de5a2 lvol 20 00:16:53.448 03:19:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ab6b5ce6-1bc9-4ab7-a979-51288c2f4295 00:16:53.448 03:19:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:53.707 03:19:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ab6b5ce6-1bc9-4ab7-a979-51288c2f4295 00:16:53.965 03:20:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
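Collected in one place, the provisioning the xtrace above just walked through looks like the sketch below. The rpc.py calls and flags are the ones from this run; the shell variables capturing the returned UUIDs are illustrative, since every run prints fresh UUIDs:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192            # same flags as the harness used
    $rpc bdev_malloc_create 64 512                          # -> Malloc0
    $rpc bdev_malloc_create 64 512                          # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)          # lvstore on the raid0 bdev
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)         # LVOL_BDEV_INIT_SIZE=20
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420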
00:16:54.224 [2024-07-15 03:20:00.280983] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.224 03:20:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:54.482 03:20:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3169177 00:16:54.482 03:20:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:54.482 03:20:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:54.482 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.416 03:20:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ab6b5ce6-1bc9-4ab7-a979-51288c2f4295 MY_SNAPSHOT 00:16:55.981 03:20:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7363f419-4e9d-46d3-b3ea-6fc61d444204 00:16:55.981 03:20:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ab6b5ce6-1bc9-4ab7-a979-51288c2f4295 30 00:16:56.237 03:20:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7363f419-4e9d-46d3-b3ea-6fc61d444204 MY_CLONE 00:16:56.495 03:20:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bf9846c2-c1eb-40f7-88f1-afa85950ebf8 00:16:56.495 03:20:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bf9846c2-c1eb-40f7-88f1-afa85950ebf8 00:16:57.060 03:20:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3169177 00:17:05.162 Initializing NVMe Controllers 00:17:05.162 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:05.162 Controller IO queue size 128, less than required. 00:17:05.162 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:05.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:05.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:05.162 Initialization complete. Launching workers. 
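The snapshot/clone steps above are the core of the lvol test: snapshot the live lvol while spdk_nvme_perf writes to it, grow the lvol, then clone the snapshot and inflate the clone so it no longer depends on its parent. A hedged sketch with the same calls, continuing from the provisioning sketch earlier (the UUID-capturing variables are illustrative; sizes follow the harness's LVOL_BDEV_INIT_SIZE=20 and LVOL_BDEV_FINAL_SIZE=30):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)     # read-only snapshot of the lvol
    $rpc bdev_lvol_resize "$lvol" 30                        # grow the lvol from 20 to 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)          # thin clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                         # allocate all clusters, detaching the clone from its parent

The "Controller IO queue size 128, less than required" notice above appears to reflect perf's -q 128 against the queue size the target advertises; as the message says, the excess requests simply queue inside the host NVMe driver. The run's results follow below.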
00:17:05.162 ========================================================
00:17:05.162                                                                    Latency(us)
00:17:05.162 Device Information                                       :       IOPS      MiB/s    Average        min        max
00:17:05.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   10774.90      42.09   11888.25    1185.24  127447.68
00:17:05.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   10690.50      41.76   11977.10    1945.42   58032.09
00:17:05.162 ========================================================
00:17:05.162 Total                                                    :   21465.40      83.85   11932.50    1185.24  127447.68
00:17:05.162
00:17:05.162 03:20:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:05.419 03:20:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ab6b5ce6-1bc9-4ab7-a979-51288c2f4295
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e9c3bec7-2359-4f03-9133-ea66985de5a2
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:05.677 rmmod nvme_tcp
00:17:05.677 rmmod nvme_fabrics
00:17:05.677 rmmod nvme_keyring
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3168755 ']'
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3168755
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3168755 ']'
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3168755
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3168755
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3168755'
00:17:05.677 killing process with pid 3168755
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3168755
00:17:05.677 03:20:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3168755
00:17:05.935 03:20:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:17:05.935
03:20:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:05.935 03:20:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:05.935 03:20:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.935 03:20:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.935 03:20:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.935 03:20:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.935 03:20:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.472 00:17:08.472 real 0m18.568s 00:17:08.472 user 1m3.941s 00:17:08.472 sys 0m5.407s 00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:08.472 ************************************ 00:17:08.472 END TEST nvmf_lvol 00:17:08.472 ************************************ 00:17:08.472 03:20:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:08.472 03:20:14 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:08.472 03:20:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:08.472 03:20:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.472 03:20:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:08.472 ************************************ 00:17:08.472 START TEST nvmf_lvs_grow 00:17:08.472 ************************************ 00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:08.472 * Looking for test storage... 
00:17:08.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... the same toolchain directories repeated ...]:/var/lib/snapd/snap/bin
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same toolchain directories repeated ...]:/var/lib/snapd/snap/bin
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same toolchain directories repeated ...]:/var/lib/snapd/snap/bin
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns
00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- #
xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.472 03:20:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:10.374 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:10.374 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:10.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:10.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.374 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:17:10.375 00:17:10.375 --- 10.0.0.2 ping statistics --- 00:17:10.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.375 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:10.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:17:10.375 00:17:10.375 --- 10.0.0.1 ping statistics --- 00:17:10.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.375 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3172430 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3172430 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3172430 ']' 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.375 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:10.375 [2024-07-15 03:20:16.302767] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:10.375 [2024-07-15 03:20:16.302863] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.375 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.375 [2024-07-15 03:20:16.377808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.375 [2024-07-15 03:20:16.470066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.375 [2024-07-15 03:20:16.470122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
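In outline, the nvmf_tcp_init sequence traced above builds a two-endpoint TCP test bed out of the two ice ports: the target port moves into a private network namespace while the initiator port stays in the root namespace. A minimal standalone sketch of the same steps (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this run):

  TARGET_IF=cvl_0_0            # NVMF_TARGET_INTERFACE in this run
  INITIATOR_IF=cvl_0_1         # NVMF_INITIATOR_INTERFACE in this run
  NS=cvl_0_0_ns_spdk           # NVMF_TARGET_NAMESPACE
  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # initiator IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target IP
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # Admit NVMe/TCP traffic on the default port before starting the target.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                        # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
  modprobe nvme-tcp

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1) and, as the next RPC shows, given a TCP transport with nvmf_create_transport -t tcp -o -u 8192.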
00:17:10.375 [2024-07-15 03:20:16.470136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.375 [2024-07-15 03:20:16.470164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.375 [2024-07-15 03:20:16.470176] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.375 [2024-07-15 03:20:16.470214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.633 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.633 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:10.633 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.633 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:10.633 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:10.633 03:20:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.633 03:20:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:10.891 [2024-07-15 03:20:16.857342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:10.891 ************************************ 00:17:10.891 START TEST lvs_grow_clean 00:17:10.891 ************************************ 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.891 03:20:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:11.150 03:20:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:11.150 03:20:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:11.408 03:20:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=77028866-db81-4234-a1a6-be2ac02f734e 00:17:11.408 03:20:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77028866-db81-4234-a1a6-be2ac02f734e 00:17:11.408 03:20:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:11.666 03:20:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:11.666 03:20:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:11.666 03:20:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 77028866-db81-4234-a1a6-be2ac02f734e lvol 150 00:17:11.924 03:20:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fd88e8ca-1988-42e5-b312-ef4c97d0578c 00:17:11.924 03:20:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:11.924 03:20:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:12.183 [2024-07-15 03:20:18.191172] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:12.183 [2024-07-15 03:20:18.191276] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:12.183 true 00:17:12.183 03:20:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77028866-db81-4234-a1a6-be2ac02f734e 00:17:12.183 03:20:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:12.441 03:20:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:12.441 03:20:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:12.700 03:20:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fd88e8ca-1988-42e5-b312-ef4c97d0578c 00:17:12.958 03:20:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:13.216 [2024-07-15 03:20:19.178155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.217 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:13.474 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3172775 00:17:13.474 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:13.474 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:13.474 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3172775 /var/tmp/bdevperf.sock 00:17:13.474 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3172775 ']' 00:17:13.474 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.474 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.474 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.474 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.474 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:13.474 [2024-07-15 03:20:19.479123] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
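Stripped of the xtrace noise, the provisioning the test has just performed is: a 200M file-backed AIO bdev, an lvstore on top of it, a 150M lvol in the lvstore, and an NVMe/TCP subsystem exporting that lvol. A condensed sketch of the same RPC sequence (paths shortened relative to the workspace; the lvstore and lvol UUIDs are whatever the create calls print):

  rpc=scripts/rpc.py                     # spdk/scripts/rpc.py in the workspace
  aio_file=test/nvmf/target/aio_bdev
  rm -f "$aio_file"
  truncate -s 200M "$aio_file"
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  # 200M over 4M clusters leaves 49 data clusters once metadata is carved out.
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)   # 150M lvol
  truncate -s 400M "$aio_file"     # pre-grow the file; lvstore still reports 49
  $rpc bdev_aio_rescan aio_bdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

bdevperf, started above with -z so it waits for RPCs, is then pointed at the export via bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0, which is why a 38912-block Nvme0n1 bdev (150M rounded up to 38 whole 4M clusters) appears in the dump that follows.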
00:17:13.474 [2024-07-15 03:20:19.479210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3172775 ] 00:17:13.474 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.474 [2024-07-15 03:20:19.542161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.733 [2024-07-15 03:20:19.634307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.733 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.733 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:13.733 03:20:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:14.297 Nvme0n1 00:17:14.297 03:20:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:14.556 [ 00:17:14.556 { 00:17:14.556 "name": "Nvme0n1", 00:17:14.556 "aliases": [ 00:17:14.556 "fd88e8ca-1988-42e5-b312-ef4c97d0578c" 00:17:14.556 ], 00:17:14.556 "product_name": "NVMe disk", 00:17:14.556 "block_size": 4096, 00:17:14.556 "num_blocks": 38912, 00:17:14.556 "uuid": "fd88e8ca-1988-42e5-b312-ef4c97d0578c", 00:17:14.556 "assigned_rate_limits": { 00:17:14.556 "rw_ios_per_sec": 0, 00:17:14.556 "rw_mbytes_per_sec": 0, 00:17:14.556 "r_mbytes_per_sec": 0, 00:17:14.556 "w_mbytes_per_sec": 0 00:17:14.556 }, 00:17:14.556 "claimed": false, 00:17:14.556 "zoned": false, 00:17:14.556 "supported_io_types": { 00:17:14.556 "read": true, 00:17:14.556 "write": true, 00:17:14.556 "unmap": true, 00:17:14.556 "flush": true, 00:17:14.556 "reset": true, 00:17:14.556 "nvme_admin": true, 00:17:14.556 "nvme_io": true, 00:17:14.556 "nvme_io_md": false, 00:17:14.556 "write_zeroes": true, 00:17:14.556 "zcopy": false, 00:17:14.556 "get_zone_info": false, 00:17:14.556 "zone_management": false, 00:17:14.556 "zone_append": false, 00:17:14.556 "compare": true, 00:17:14.556 "compare_and_write": true, 00:17:14.556 "abort": true, 00:17:14.556 "seek_hole": false, 00:17:14.556 "seek_data": false, 00:17:14.556 "copy": true, 00:17:14.556 "nvme_iov_md": false 00:17:14.556 }, 00:17:14.556 "memory_domains": [ 00:17:14.556 { 00:17:14.556 "dma_device_id": "system", 00:17:14.556 "dma_device_type": 1 00:17:14.556 } 00:17:14.556 ], 00:17:14.556 "driver_specific": { 00:17:14.556 "nvme": [ 00:17:14.556 { 00:17:14.556 "trid": { 00:17:14.556 "trtype": "TCP", 00:17:14.556 "adrfam": "IPv4", 00:17:14.556 "traddr": "10.0.0.2", 00:17:14.556 "trsvcid": "4420", 00:17:14.556 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:14.556 }, 00:17:14.556 "ctrlr_data": { 00:17:14.556 "cntlid": 1, 00:17:14.556 "vendor_id": "0x8086", 00:17:14.556 "model_number": "SPDK bdev Controller", 00:17:14.556 "serial_number": "SPDK0", 00:17:14.556 "firmware_revision": "24.09", 00:17:14.556 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:14.556 "oacs": { 00:17:14.556 "security": 0, 00:17:14.556 "format": 0, 00:17:14.556 "firmware": 0, 00:17:14.556 "ns_manage": 0 00:17:14.556 }, 00:17:14.556 "multi_ctrlr": true, 00:17:14.556 "ana_reporting": false 00:17:14.556 }, 
00:17:14.556 "vs": { 00:17:14.556 "nvme_version": "1.3" 00:17:14.556 }, 00:17:14.556 "ns_data": { 00:17:14.556 "id": 1, 00:17:14.556 "can_share": true 00:17:14.556 } 00:17:14.556 } 00:17:14.556 ], 00:17:14.556 "mp_policy": "active_passive" 00:17:14.556 } 00:17:14.556 } 00:17:14.556 ] 00:17:14.556 03:20:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3172886 00:17:14.556 03:20:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:14.556 03:20:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:14.556 Running I/O for 10 seconds... 00:17:15.487 Latency(us) 00:17:15.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.487 Nvme0n1 : 1.00 14305.00 55.88 0.00 0.00 0.00 0.00 0.00 00:17:15.487 =================================================================================================================== 00:17:15.487 Total : 14305.00 55.88 0.00 0.00 0.00 0.00 0.00 00:17:15.487 00:17:16.421 03:20:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 77028866-db81-4234-a1a6-be2ac02f734e 00:17:16.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.680 Nvme0n1 : 2.00 14424.00 56.34 0.00 0.00 0.00 0.00 0.00 00:17:16.680 =================================================================================================================== 00:17:16.680 Total : 14424.00 56.34 0.00 0.00 0.00 0.00 0.00 00:17:16.680 00:17:16.680 true 00:17:16.680 03:20:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77028866-db81-4234-a1a6-be2ac02f734e 00:17:16.680 03:20:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:17.245 03:20:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:17.246 03:20:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:17.246 03:20:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3172886 00:17:17.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.504 Nvme0n1 : 3.00 14590.67 56.99 0.00 0.00 0.00 0.00 0.00 00:17:17.504 =================================================================================================================== 00:17:17.505 Total : 14590.67 56.99 0.00 0.00 0.00 0.00 0.00 00:17:17.505 00:17:18.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.881 Nvme0n1 : 4.00 14705.75 57.44 0.00 0.00 0.00 0.00 0.00 00:17:18.881 =================================================================================================================== 00:17:18.882 Total : 14705.75 57.44 0.00 0.00 0.00 0.00 0.00 00:17:18.882 00:17:19.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.819 Nvme0n1 : 5.00 14805.60 57.83 0.00 0.00 0.00 0.00 0.00 00:17:19.819 =================================================================================================================== 00:17:19.819 
Total : 14805.60 57.83 0.00 0.00 0.00 0.00 0.00 00:17:19.819 00:17:20.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.754 Nvme0n1 : 6.00 14932.17 58.33 0.00 0.00 0.00 0.00 0.00 00:17:20.754 =================================================================================================================== 00:17:20.754 Total : 14932.17 58.33 0.00 0.00 0.00 0.00 0.00 00:17:20.754 00:17:21.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.690 Nvme0n1 : 7.00 14940.29 58.36 0.00 0.00 0.00 0.00 0.00 00:17:21.690 =================================================================================================================== 00:17:21.690 Total : 14940.29 58.36 0.00 0.00 0.00 0.00 0.00 00:17:21.690 00:17:22.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.629 Nvme0n1 : 8.00 14997.25 58.58 0.00 0.00 0.00 0.00 0.00 00:17:22.629 =================================================================================================================== 00:17:22.629 Total : 14997.25 58.58 0.00 0.00 0.00 0.00 0.00 00:17:22.629 00:17:23.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.569 Nvme0n1 : 9.00 15055.78 58.81 0.00 0.00 0.00 0.00 0.00 00:17:23.569 =================================================================================================================== 00:17:23.569 Total : 15055.78 58.81 0.00 0.00 0.00 0.00 0.00 00:17:23.569 00:17:24.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.506 Nvme0n1 : 10.00 15055.30 58.81 0.00 0.00 0.00 0.00 0.00 00:17:24.506 =================================================================================================================== 00:17:24.506 Total : 15055.30 58.81 0.00 0.00 0.00 0.00 0.00 00:17:24.506 00:17:24.765 00:17:24.765 Latency(us) 00:17:24.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.765 Nvme0n1 : 10.01 15059.73 58.83 0.00 0.00 8494.73 2354.44 16699.54 00:17:24.765 =================================================================================================================== 00:17:24.765 Total : 15059.73 58.83 0.00 0.00 8494.73 2354.44 16699.54 00:17:24.765 0 00:17:24.765 03:20:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3172775 00:17:24.765 03:20:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3172775 ']' 00:17:24.765 03:20:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3172775 00:17:24.766 03:20:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:24.766 03:20:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.766 03:20:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3172775 00:17:24.766 03:20:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:24.766 03:20:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:24.766 03:20:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3172775' 00:17:24.766 killing process with pid 3172775 00:17:24.766 03:20:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3172775 00:17:24.766 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.766 00:17:24.766 Latency(us) 00:17:24.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.766 =================================================================================================================== 00:17:24.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.766 03:20:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3172775 00:17:25.024 03:20:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:25.024 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:25.282 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77028866-db81-4234-a1a6-be2ac02f734e 00:17:25.282 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:25.541 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:25.542 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:25.542 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:25.801 [2024-07-15 03:20:31.902682] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77028866-db81-4234-a1a6-be2ac02f734e 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77028866-db81-4234-a1a6-be2ac02f734e 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:25.801 03:20:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77028866-db81-4234-a1a6-be2ac02f734e 00:17:26.060 request: 00:17:26.060 { 00:17:26.060 "uuid": "77028866-db81-4234-a1a6-be2ac02f734e", 00:17:26.060 "method": "bdev_lvol_get_lvstores", 00:17:26.060 "req_id": 1 00:17:26.060 } 00:17:26.060 Got JSON-RPC error response 00:17:26.060 response: 00:17:26.060 { 00:17:26.060 "code": -19, 00:17:26.060 "message": "No such device" 00:17:26.060 } 00:17:26.060 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:26.060 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:26.060 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:26.060 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:26.060 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:26.318 aio_bdev 00:17:26.576 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fd88e8ca-1988-42e5-b312-ef4c97d0578c 00:17:26.576 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=fd88e8ca-1988-42e5-b312-ef4c97d0578c 00:17:26.576 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:26.576 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:26.576 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:26.576 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:26.576 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:26.576 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fd88e8ca-1988-42e5-b312-ef4c97d0578c -t 2000 00:17:26.834 [ 00:17:26.834 { 00:17:26.834 "name": "fd88e8ca-1988-42e5-b312-ef4c97d0578c", 00:17:26.834 "aliases": [ 00:17:26.834 "lvs/lvol" 00:17:26.834 ], 00:17:26.834 "product_name": "Logical Volume", 00:17:26.834 "block_size": 4096, 00:17:26.834 "num_blocks": 38912, 00:17:26.834 "uuid": "fd88e8ca-1988-42e5-b312-ef4c97d0578c", 00:17:26.834 "assigned_rate_limits": { 00:17:26.834 "rw_ios_per_sec": 0, 00:17:26.834 "rw_mbytes_per_sec": 0, 00:17:26.834 "r_mbytes_per_sec": 0, 00:17:26.834 "w_mbytes_per_sec": 0 00:17:26.834 }, 00:17:26.834 "claimed": false, 00:17:26.834 "zoned": false, 00:17:26.834 "supported_io_types": { 00:17:26.834 "read": true, 00:17:26.834 "write": true, 00:17:26.834 "unmap": true, 00:17:26.834 "flush": false, 00:17:26.834 "reset": true, 00:17:26.834 "nvme_admin": false, 00:17:26.834 "nvme_io": false, 00:17:26.834 
"nvme_io_md": false, 00:17:26.834 "write_zeroes": true, 00:17:26.834 "zcopy": false, 00:17:26.834 "get_zone_info": false, 00:17:26.834 "zone_management": false, 00:17:26.834 "zone_append": false, 00:17:26.834 "compare": false, 00:17:26.834 "compare_and_write": false, 00:17:26.834 "abort": false, 00:17:26.834 "seek_hole": true, 00:17:26.834 "seek_data": true, 00:17:26.834 "copy": false, 00:17:26.834 "nvme_iov_md": false 00:17:26.834 }, 00:17:26.834 "driver_specific": { 00:17:26.834 "lvol": { 00:17:26.834 "lvol_store_uuid": "77028866-db81-4234-a1a6-be2ac02f734e", 00:17:26.834 "base_bdev": "aio_bdev", 00:17:26.834 "thin_provision": false, 00:17:26.834 "num_allocated_clusters": 38, 00:17:26.834 "snapshot": false, 00:17:26.834 "clone": false, 00:17:26.834 "esnap_clone": false 00:17:26.834 } 00:17:26.834 } 00:17:26.834 } 00:17:26.834 ] 00:17:26.834 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:26.834 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77028866-db81-4234-a1a6-be2ac02f734e 00:17:26.834 03:20:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:27.401 03:20:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:27.401 03:20:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77028866-db81-4234-a1a6-be2ac02f734e 00:17:27.401 03:20:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:27.401 03:20:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:27.401 03:20:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fd88e8ca-1988-42e5-b312-ef4c97d0578c 00:17:27.658 03:20:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77028866-db81-4234-a1a6-be2ac02f734e 00:17:27.916 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:28.480 00:17:28.480 real 0m17.448s 00:17:28.480 user 0m17.051s 00:17:28.480 sys 0m1.825s 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:28.480 ************************************ 00:17:28.480 END TEST lvs_grow_clean 00:17:28.480 ************************************ 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:28.480 ************************************ 00:17:28.480 START TEST lvs_grow_dirty 00:17:28.480 ************************************ 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:28.480 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:28.737 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:28.737 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:28.995 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5c70d666-5328-444a-a1d2-20b401a7918c 00:17:28.995 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:28.995 03:20:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:29.252 03:20:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:29.252 03:20:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:29.252 03:20:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5c70d666-5328-444a-a1d2-20b401a7918c lvol 150 00:17:29.509 03:20:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4 00:17:29.509 03:20:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:29.509 03:20:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:29.767 
[2024-07-15 03:20:35.794380] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:29.767 [2024-07-15 03:20:35.794469] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:29.767 true 00:17:29.767 03:20:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:29.767 03:20:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:30.024 03:20:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:30.024 03:20:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:30.282 03:20:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4 00:17:30.541 03:20:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:30.799 [2024-07-15 03:20:36.797455] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.799 03:20:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:31.058 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3174916 00:17:31.058 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:31.058 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:31.058 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3174916 /var/tmp/bdevperf.sock 00:17:31.058 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3174916 ']' 00:17:31.058 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.058 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.058 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
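The grow itself is deliberately issued mid-run: the backing file was already doubled and rescanned above (old block count 51200, new block count 102400), and while bdevperf drives randwrite I/O the test calls bdev_lvol_grow_lvstore, then verifies the new geometry once the run ends. Condensed, with the same $rpc and $lvs shorthand as in the earlier sketch:

  # Backing file already at 400M via truncate + bdev_aio_rescan; the lvstore
  # still reports 49 data clusters until it is explicitly grown.
  $rpc bdev_lvol_grow_lvstore -u "$lvs"        # issued while I/O is in flight
  # 400M over 4M clusters: total_data_clusters goes 49 -> 99; the 150M lvol
  # holds 38 of them, so free_clusters comes out at 99 - 38 = 61.
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'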
00:17:31.058 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.058 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:31.058 [2024-07-15 03:20:37.106951] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:31.058 [2024-07-15 03:20:37.107022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174916 ] 00:17:31.058 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.058 [2024-07-15 03:20:37.170508] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.316 [2024-07-15 03:20:37.262192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.316 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.316 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:31.316 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:31.881 Nvme0n1 00:17:31.881 03:20:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:32.139 [ 00:17:32.139 { 00:17:32.139 "name": "Nvme0n1", 00:17:32.139 "aliases": [ 00:17:32.139 "a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4" 00:17:32.139 ], 00:17:32.139 "product_name": "NVMe disk", 00:17:32.139 "block_size": 4096, 00:17:32.139 "num_blocks": 38912, 00:17:32.139 "uuid": "a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4", 00:17:32.139 "assigned_rate_limits": { 00:17:32.139 "rw_ios_per_sec": 0, 00:17:32.139 "rw_mbytes_per_sec": 0, 00:17:32.139 "r_mbytes_per_sec": 0, 00:17:32.139 "w_mbytes_per_sec": 0 00:17:32.139 }, 00:17:32.139 "claimed": false, 00:17:32.139 "zoned": false, 00:17:32.139 "supported_io_types": { 00:17:32.139 "read": true, 00:17:32.139 "write": true, 00:17:32.139 "unmap": true, 00:17:32.139 "flush": true, 00:17:32.139 "reset": true, 00:17:32.139 "nvme_admin": true, 00:17:32.139 "nvme_io": true, 00:17:32.139 "nvme_io_md": false, 00:17:32.139 "write_zeroes": true, 00:17:32.139 "zcopy": false, 00:17:32.139 "get_zone_info": false, 00:17:32.139 "zone_management": false, 00:17:32.139 "zone_append": false, 00:17:32.139 "compare": true, 00:17:32.139 "compare_and_write": true, 00:17:32.139 "abort": true, 00:17:32.139 "seek_hole": false, 00:17:32.139 "seek_data": false, 00:17:32.139 "copy": true, 00:17:32.139 "nvme_iov_md": false 00:17:32.139 }, 00:17:32.139 "memory_domains": [ 00:17:32.139 { 00:17:32.139 "dma_device_id": "system", 00:17:32.139 "dma_device_type": 1 00:17:32.139 } 00:17:32.139 ], 00:17:32.139 "driver_specific": { 00:17:32.139 "nvme": [ 00:17:32.139 { 00:17:32.139 "trid": { 00:17:32.139 "trtype": "TCP", 00:17:32.139 "adrfam": "IPv4", 00:17:32.139 "traddr": "10.0.0.2", 00:17:32.139 "trsvcid": "4420", 00:17:32.139 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:32.139 }, 00:17:32.139 "ctrlr_data": { 00:17:32.139 "cntlid": 1, 00:17:32.139 "vendor_id": "0x8086", 00:17:32.139 "model_number": "SPDK bdev Controller", 00:17:32.139 "serial_number": "SPDK0", 
00:17:32.140 "firmware_revision": "24.09", 00:17:32.140 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:32.140 "oacs": { 00:17:32.140 "security": 0, 00:17:32.140 "format": 0, 00:17:32.140 "firmware": 0, 00:17:32.140 "ns_manage": 0 00:17:32.140 }, 00:17:32.140 "multi_ctrlr": true, 00:17:32.140 "ana_reporting": false 00:17:32.140 }, 00:17:32.140 "vs": { 00:17:32.140 "nvme_version": "1.3" 00:17:32.140 }, 00:17:32.140 "ns_data": { 00:17:32.140 "id": 1, 00:17:32.140 "can_share": true 00:17:32.140 } 00:17:32.140 } 00:17:32.140 ], 00:17:32.140 "mp_policy": "active_passive" 00:17:32.140 } 00:17:32.140 } 00:17:32.140 ] 00:17:32.140 03:20:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3175052 00:17:32.140 03:20:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:32.140 03:20:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:32.140 Running I/O for 10 seconds... 00:17:33.108 Latency(us) 00:17:33.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.108 Nvme0n1 : 1.00 14623.00 57.12 0.00 0.00 0.00 0.00 0.00 00:17:33.108 =================================================================================================================== 00:17:33.108 Total : 14623.00 57.12 0.00 0.00 0.00 0.00 0.00 00:17:33.108 00:17:34.042 03:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:34.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.299 Nvme0n1 : 2.00 14614.00 57.09 0.00 0.00 0.00 0.00 0.00 00:17:34.299 =================================================================================================================== 00:17:34.299 Total : 14614.00 57.09 0.00 0.00 0.00 0.00 0.00 00:17:34.299 00:17:34.299 true 00:17:34.299 03:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:34.299 03:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:34.556 03:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:34.556 03:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:34.556 03:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3175052 00:17:35.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.121 Nvme0n1 : 3.00 14805.00 57.83 0.00 0.00 0.00 0.00 0.00 00:17:35.121 =================================================================================================================== 00:17:35.121 Total : 14805.00 57.83 0.00 0.00 0.00 0.00 0.00 00:17:35.121 00:17:36.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.053 Nvme0n1 : 4.00 14851.00 58.01 0.00 0.00 0.00 0.00 0.00 00:17:36.053 =================================================================================================================== 00:17:36.053 Total : 14851.00 58.01 0.00 
0.00 0.00 0.00 0.00 00:17:36.053 00:17:37.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.421 Nvme0n1 : 5.00 14903.40 58.22 0.00 0.00 0.00 0.00 0.00 00:17:37.421 =================================================================================================================== 00:17:37.421 Total : 14903.40 58.22 0.00 0.00 0.00 0.00 0.00 00:17:37.421 00:17:38.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.353 Nvme0n1 : 6.00 14896.00 58.19 0.00 0.00 0.00 0.00 0.00 00:17:38.353 =================================================================================================================== 00:17:38.353 Total : 14896.00 58.19 0.00 0.00 0.00 0.00 0.00 00:17:38.353 00:17:39.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.286 Nvme0n1 : 7.00 14900.00 58.20 0.00 0.00 0.00 0.00 0.00 00:17:39.286 =================================================================================================================== 00:17:39.286 Total : 14900.00 58.20 0.00 0.00 0.00 0.00 0.00 00:17:39.286 00:17:40.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.219 Nvme0n1 : 8.00 14911.12 58.25 0.00 0.00 0.00 0.00 0.00 00:17:40.219 =================================================================================================================== 00:17:40.219 Total : 14911.12 58.25 0.00 0.00 0.00 0.00 0.00 00:17:40.219 00:17:41.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.153 Nvme0n1 : 9.00 14962.11 58.45 0.00 0.00 0.00 0.00 0.00 00:17:41.153 =================================================================================================================== 00:17:41.153 Total : 14962.11 58.45 0.00 0.00 0.00 0.00 0.00 00:17:41.153 00:17:42.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.086 Nvme0n1 : 10.00 14964.50 58.46 0.00 0.00 0.00 0.00 0.00 00:17:42.086 =================================================================================================================== 00:17:42.086 Total : 14964.50 58.46 0.00 0.00 0.00 0.00 0.00 00:17:42.086 00:17:42.086 00:17:42.086 Latency(us) 00:17:42.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.086 Nvme0n1 : 10.01 14965.43 58.46 0.00 0.00 8548.03 2245.21 16699.54 00:17:42.086 =================================================================================================================== 00:17:42.086 Total : 14965.43 58.46 0.00 0.00 8548.03 2245.21 16699.54 00:17:42.086 0 00:17:42.344 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3174916 00:17:42.344 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3174916 ']' 00:17:42.344 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3174916 00:17:42.344 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:42.344 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.344 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3174916 00:17:42.344 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:42.344 03:20:48 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:42.344 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3174916' 00:17:42.344 killing process with pid 3174916 00:17:42.344 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3174916 00:17:42.344 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.344 00:17:42.344 Latency(us) 00:17:42.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.344 =================================================================================================================== 00:17:42.344 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.344 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3174916 00:17:42.344 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:42.908 03:20:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:42.908 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:42.908 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:43.166 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:43.166 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:43.166 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3172430 00:17:43.166 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3172430 00:17:43.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3172430 Killed "${NVMF_APP[@]}" "$@" 00:17:43.166 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:43.166 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:43.166 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.166 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:43.166 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:43.424 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3176380 00:17:43.424 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:43.424 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3176380 00:17:43.424 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3176380 ']' 00:17:43.424 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.424 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.424 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.424 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.424 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:43.424 [2024-07-15 03:20:49.359412] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:43.424 [2024-07-15 03:20:49.359498] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.424 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.424 [2024-07-15 03:20:49.425889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.424 [2024-07-15 03:20:49.512449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.424 [2024-07-15 03:20:49.512511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.424 [2024-07-15 03:20:49.512524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.424 [2024-07-15 03:20:49.512543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.424 [2024-07-15 03:20:49.512553] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
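At this point the suite has deliberately killed the first target with kill -9, leaving the lvstore dirty on disk, and is bringing up a fresh nvmf_tgt (pid 3176380) inside the cvl_0_0_ns_spdk namespace. The waitforlisten trace above blocks until that process answers on /var/tmp/spdk.sock; a minimal sketch of the same idea, assuming only the stock rpc.py client (rpc_get_methods is used purely as a liveness probe, and this is not the suite's actual helper):

# Poll the UNIX-domain RPC socket until the new target is ready.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 1 100); do
    "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done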
00:17:43.424 [2024-07-15 03:20:49.512580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.682 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.682 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:43.682 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.682 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:43.682 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:43.682 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.682 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:43.939 [2024-07-15 03:20:49.931581] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:43.939 [2024-07-15 03:20:49.931697] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:43.939 [2024-07-15 03:20:49.931742] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:43.939 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:43.939 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4 00:17:43.939 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4 00:17:43.939 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:43.939 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:43.939 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:43.939 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:43.939 03:20:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:44.197 03:20:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4 -t 2000 00:17:44.455 [ 00:17:44.455 { 00:17:44.455 "name": "a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4", 00:17:44.455 "aliases": [ 00:17:44.455 "lvs/lvol" 00:17:44.455 ], 00:17:44.455 "product_name": "Logical Volume", 00:17:44.455 "block_size": 4096, 00:17:44.455 "num_blocks": 38912, 00:17:44.455 "uuid": "a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4", 00:17:44.455 "assigned_rate_limits": { 00:17:44.455 "rw_ios_per_sec": 0, 00:17:44.455 "rw_mbytes_per_sec": 0, 00:17:44.455 "r_mbytes_per_sec": 0, 00:17:44.455 "w_mbytes_per_sec": 0 00:17:44.455 }, 00:17:44.455 "claimed": false, 00:17:44.455 "zoned": false, 00:17:44.455 "supported_io_types": { 00:17:44.455 "read": true, 00:17:44.455 "write": true, 00:17:44.455 "unmap": true, 00:17:44.455 "flush": false, 00:17:44.455 "reset": true, 00:17:44.455 "nvme_admin": false, 00:17:44.455 "nvme_io": false, 00:17:44.455 "nvme_io_md": 
false, 00:17:44.455 "write_zeroes": true, 00:17:44.455 "zcopy": false, 00:17:44.455 "get_zone_info": false, 00:17:44.455 "zone_management": false, 00:17:44.455 "zone_append": false, 00:17:44.455 "compare": false, 00:17:44.455 "compare_and_write": false, 00:17:44.455 "abort": false, 00:17:44.455 "seek_hole": true, 00:17:44.455 "seek_data": true, 00:17:44.455 "copy": false, 00:17:44.455 "nvme_iov_md": false 00:17:44.455 }, 00:17:44.455 "driver_specific": { 00:17:44.455 "lvol": { 00:17:44.455 "lvol_store_uuid": "5c70d666-5328-444a-a1d2-20b401a7918c", 00:17:44.455 "base_bdev": "aio_bdev", 00:17:44.455 "thin_provision": false, 00:17:44.455 "num_allocated_clusters": 38, 00:17:44.455 "snapshot": false, 00:17:44.455 "clone": false, 00:17:44.455 "esnap_clone": false 00:17:44.455 } 00:17:44.455 } 00:17:44.455 } 00:17:44.455 ] 00:17:44.455 03:20:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:44.455 03:20:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:44.455 03:20:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:44.712 03:20:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:44.712 03:20:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:44.712 03:20:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:44.972 03:20:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:44.972 03:20:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:45.228 [2024-07-15 03:20:51.192865] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
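Deleting aio_bdev above hot-removes the lvstore with it, so the script asserts, via the NOT wrapper being traced here, that looking the lvstore up must now fail; the -19 "No such device" JSON-RPC error just below is the expected outcome. A simplified stand-in for that assertion, assuming the same rpc.py path and the lvstore UUID from this run:

# Expect failure: the lvstore must be gone once its base bdev is deleted.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if "$rpc" bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c >/dev/null 2>&1; then
    echo "lvstore unexpectedly still present" >&2
    exit 1
fi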
00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:45.228 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:45.486 request: 00:17:45.486 { 00:17:45.486 "uuid": "5c70d666-5328-444a-a1d2-20b401a7918c", 00:17:45.486 "method": "bdev_lvol_get_lvstores", 00:17:45.486 "req_id": 1 00:17:45.486 } 00:17:45.486 Got JSON-RPC error response 00:17:45.486 response: 00:17:45.486 { 00:17:45.486 "code": -19, 00:17:45.486 "message": "No such device" 00:17:45.486 } 00:17:45.486 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:45.486 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:45.486 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:45.486 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:45.486 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:45.744 aio_bdev 00:17:45.744 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4 00:17:45.744 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4 00:17:45.744 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:45.744 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:45.744 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:45.744 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:45.744 03:20:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:46.001 03:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4 -t 2000 00:17:46.259 [ 00:17:46.259 { 00:17:46.259 "name": "a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4", 00:17:46.259 "aliases": [ 00:17:46.259 "lvs/lvol" 00:17:46.259 ], 00:17:46.259 "product_name": "Logical Volume", 00:17:46.259 "block_size": 4096, 00:17:46.259 "num_blocks": 38912, 00:17:46.259 "uuid": "a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4", 00:17:46.259 "assigned_rate_limits": { 00:17:46.259 "rw_ios_per_sec": 0, 00:17:46.259 "rw_mbytes_per_sec": 0, 00:17:46.259 "r_mbytes_per_sec": 0, 00:17:46.259 "w_mbytes_per_sec": 0 00:17:46.259 }, 00:17:46.259 "claimed": false, 00:17:46.259 "zoned": false, 00:17:46.259 "supported_io_types": { 
00:17:46.259 "read": true, 00:17:46.259 "write": true, 00:17:46.259 "unmap": true, 00:17:46.259 "flush": false, 00:17:46.259 "reset": true, 00:17:46.259 "nvme_admin": false, 00:17:46.259 "nvme_io": false, 00:17:46.259 "nvme_io_md": false, 00:17:46.259 "write_zeroes": true, 00:17:46.259 "zcopy": false, 00:17:46.259 "get_zone_info": false, 00:17:46.259 "zone_management": false, 00:17:46.259 "zone_append": false, 00:17:46.259 "compare": false, 00:17:46.259 "compare_and_write": false, 00:17:46.259 "abort": false, 00:17:46.259 "seek_hole": true, 00:17:46.259 "seek_data": true, 00:17:46.259 "copy": false, 00:17:46.259 "nvme_iov_md": false 00:17:46.259 }, 00:17:46.259 "driver_specific": { 00:17:46.259 "lvol": { 00:17:46.259 "lvol_store_uuid": "5c70d666-5328-444a-a1d2-20b401a7918c", 00:17:46.259 "base_bdev": "aio_bdev", 00:17:46.259 "thin_provision": false, 00:17:46.259 "num_allocated_clusters": 38, 00:17:46.259 "snapshot": false, 00:17:46.259 "clone": false, 00:17:46.259 "esnap_clone": false 00:17:46.259 } 00:17:46.259 } 00:17:46.259 } 00:17:46.259 ] 00:17:46.259 03:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:46.259 03:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:46.259 03:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:46.516 03:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:46.516 03:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:46.516 03:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:46.775 03:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:46.775 03:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a76d33f1-78f6-4e7d-9cf5-f1905fd5c5f4 00:17:47.033 03:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c70d666-5328-444a-a1d2-20b401a7918c 00:17:47.291 03:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.858 00:17:47.858 real 0m19.323s 00:17:47.858 user 0m48.574s 00:17:47.858 sys 0m4.693s 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:47.858 ************************************ 00:17:47.858 END TEST lvs_grow_dirty 00:17:47.858 ************************************ 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
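The checks traced above close the loop on the dirty-grow scenario: after the blobstore replay, free_clusters still reads 61 and total_data_clusters reads 99, proving the grown geometry survived the kill -9, and only then are the lvol, the lvstore, and the aio bdev torn down. A compact sketch of that verification step, using the jq filters and UUID from this run:

# Re-read the recovered lvstore and assert the grown geometry survived.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
uuid=5c70d666-5328-444a-a1d2-20b401a7918c
free=$("$rpc" bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].free_clusters')
total=$("$rpc" bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].total_data_clusters')
[ "$free" -eq 61 ] && [ "$total" -eq 99 ]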
00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:47.858 nvmf_trace.0 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:47.858 rmmod nvme_tcp 00:17:47.858 rmmod nvme_fabrics 00:17:47.858 rmmod nvme_keyring 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3176380 ']' 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3176380 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3176380 ']' 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3176380 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3176380 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3176380' 00:17:47.858 killing process with pid 3176380 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3176380 00:17:47.858 03:20:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3176380 00:17:48.116 03:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:48.117 03:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:48.117 03:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:48.117 
03:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:48.117 03:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:48.117 03:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.117 03:20:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.117 03:20:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.016 03:20:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:50.016 00:17:50.016 real 0m42.045s 00:17:50.016 user 1m11.510s 00:17:50.016 sys 0m8.344s 00:17:50.016 03:20:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:50.016 03:20:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:50.016 ************************************ 00:17:50.016 END TEST nvmf_lvs_grow 00:17:50.016 ************************************ 00:17:50.274 03:20:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:50.274 03:20:56 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:50.274 03:20:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:50.274 03:20:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.274 03:20:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:50.274 ************************************ 00:17:50.274 START TEST nvmf_bdev_io_wait 00:17:50.274 ************************************ 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:50.274 * Looking for test storage... 
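Each suite in this log is driven through the same run_test wrapper, which is why every section is bracketed by the starred START TEST / END TEST banners and a bash time summary like the 0m42s printed above for nvmf_lvs_grow. A rough sketch of that pattern, assuming a simplified wrapper rather than the real autotest_common.sh implementation (the test-storage probe interrupted here resumes immediately below):

# Simplified run_test: banner, time the test body, banner again.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    time "$@"
    echo "END TEST $name"
    echo "************************************"
}
run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp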
00:17:50.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:50.274 03:20:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.201 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:52.201 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:52.202 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:52.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:52.202 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:52.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:52.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:17:52.202 00:17:52.202 --- 10.0.0.2 ping statistics --- 00:17:52.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.202 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:17:52.202 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:52.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:17:52.461 00:17:52.461 --- 10.0.0.1 ping statistics --- 00:17:52.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.461 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3178899 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3178899 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3178899 ']' 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.461 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.461 [2024-07-15 03:20:58.418632] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:17:52.461 [2024-07-15 03:20:58.418702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.461 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.461 [2024-07-15 03:20:58.482655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.461 [2024-07-15 03:20:58.569002] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.461 [2024-07-15 03:20:58.569053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.461 [2024-07-15 03:20:58.569075] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.461 [2024-07-15 03:20:58.569086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.461 [2024-07-15 03:20:58.569096] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.461 [2024-07-15 03:20:58.569160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.461 [2024-07-15 03:20:58.569235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.461 [2024-07-15 03:20:58.569303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.461 [2024-07-15 03:20:58.569301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.720 [2024-07-15 03:20:58.730609] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
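The bdev_io_wait target is deliberately brought up in two phases: nvmf_tgt starts with --wait-for-rpc, the bdev layer is then handed a tiny IO pool over RPC (bdev_set_options -p 5 -c 1) before framework_start_init completes startup, and only then is the TCP transport created. Starving the bdev_io pool like this is what later makes submissions queue and exercises the io_wait path under test. The same sequence replayed with the plain rpc.py client, which rpc_cmd in the trace wraps (the Malloc0 bdev and cnode1 subsystem plumbing follow just below):

# Two-phase start-up: shrink the bdev_io pool before subsystem init.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" bdev_set_options -p 5 -c 1                # bdev_io pool of 5, per-thread cache of 1
"$rpc" framework_start_init                      # finish the startup deferred by --wait-for-rpc
"$rpc" nvmf_create_transport -t tcp -o -u 8192   # TCP transport, flags exactly as traced above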
00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.720 Malloc0 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.720 [2024-07-15 03:20:58.793567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3178923 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3178924 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3178927 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:52.720 { 00:17:52.720 "params": { 00:17:52.720 "name": "Nvme$subsystem", 00:17:52.720 "trtype": "$TEST_TRANSPORT", 00:17:52.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.720 "adrfam": "ipv4", 00:17:52.720 "trsvcid": "$NVMF_PORT", 00:17:52.720 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.720 "hdgst": ${hdgst:-false}, 00:17:52.720 "ddgst": ${ddgst:-false} 00:17:52.720 }, 00:17:52.720 "method": "bdev_nvme_attach_controller" 00:17:52.720 } 00:17:52.720 EOF 00:17:52.720 )") 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3178929 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:52.720 { 00:17:52.720 "params": { 00:17:52.720 "name": "Nvme$subsystem", 00:17:52.720 "trtype": "$TEST_TRANSPORT", 00:17:52.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.720 "adrfam": "ipv4", 00:17:52.720 "trsvcid": "$NVMF_PORT", 00:17:52.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.720 "hdgst": ${hdgst:-false}, 00:17:52.720 "ddgst": ${ddgst:-false} 00:17:52.720 }, 00:17:52.720 "method": "bdev_nvme_attach_controller" 00:17:52.720 } 00:17:52.720 EOF 00:17:52.720 )") 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:52.720 { 00:17:52.720 "params": { 00:17:52.720 "name": "Nvme$subsystem", 00:17:52.720 "trtype": "$TEST_TRANSPORT", 00:17:52.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.720 "adrfam": "ipv4", 00:17:52.720 "trsvcid": "$NVMF_PORT", 00:17:52.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.720 "hdgst": ${hdgst:-false}, 00:17:52.720 "ddgst": ${ddgst:-false} 00:17:52.720 }, 00:17:52.720 "method": "bdev_nvme_attach_controller" 00:17:52.720 } 00:17:52.720 EOF 00:17:52.720 )") 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:52.720 { 00:17:52.720 "params": { 
00:17:52.720 "name": "Nvme$subsystem", 00:17:52.720 "trtype": "$TEST_TRANSPORT", 00:17:52.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.720 "adrfam": "ipv4", 00:17:52.720 "trsvcid": "$NVMF_PORT", 00:17:52.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.720 "hdgst": ${hdgst:-false}, 00:17:52.720 "ddgst": ${ddgst:-false} 00:17:52.720 }, 00:17:52.720 "method": "bdev_nvme_attach_controller" 00:17:52.720 } 00:17:52.720 EOF 00:17:52.720 )") 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3178923 00:17:52.720 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:52.721 "params": { 00:17:52.721 "name": "Nvme1", 00:17:52.721 "trtype": "tcp", 00:17:52.721 "traddr": "10.0.0.2", 00:17:52.721 "adrfam": "ipv4", 00:17:52.721 "trsvcid": "4420", 00:17:52.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.721 "hdgst": false, 00:17:52.721 "ddgst": false 00:17:52.721 }, 00:17:52.721 "method": "bdev_nvme_attach_controller" 00:17:52.721 }' 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:52.721 "params": { 00:17:52.721 "name": "Nvme1", 00:17:52.721 "trtype": "tcp", 00:17:52.721 "traddr": "10.0.0.2", 00:17:52.721 "adrfam": "ipv4", 00:17:52.721 "trsvcid": "4420", 00:17:52.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.721 "hdgst": false, 00:17:52.721 "ddgst": false 00:17:52.721 }, 00:17:52.721 "method": "bdev_nvme_attach_controller" 00:17:52.721 }' 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:52.721 "params": { 00:17:52.721 "name": "Nvme1", 00:17:52.721 "trtype": "tcp", 00:17:52.721 "traddr": "10.0.0.2", 00:17:52.721 "adrfam": "ipv4", 00:17:52.721 "trsvcid": "4420", 00:17:52.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.721 "hdgst": false, 00:17:52.721 "ddgst": false 00:17:52.721 }, 00:17:52.721 "method": "bdev_nvme_attach_controller" 00:17:52.721 }' 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:52.721 03:20:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:52.721 "params": { 00:17:52.721 "name": "Nvme1", 00:17:52.721 "trtype": "tcp", 00:17:52.721 "traddr": "10.0.0.2", 00:17:52.721 "adrfam": "ipv4", 00:17:52.721 "trsvcid": "4420", 00:17:52.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.721 "hdgst": false, 00:17:52.721 "ddgst": false 00:17:52.721 }, 00:17:52.721 "method": 
"bdev_nvme_attach_controller" 00:17:52.721 }' 00:17:52.721 [2024-07-15 03:20:58.842256] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:52.721 [2024-07-15 03:20:58.842251] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:52.721 [2024-07-15 03:20:58.842251] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:52.721 [2024-07-15 03:20:58.842251] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:52.721 [2024-07-15 03:20:58.842333] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:52.721 [2024-07-15 03:20:58.842355] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 03:20:58.842355] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 03:20:58.842356] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:52.721 --proc-type=auto ] 00:17:52.721 --proc-type=auto ] 00:17:52.979 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.979 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.979 [2024-07-15 03:20:59.016838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.979 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.979 [2024-07-15 03:20:59.091798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:52.979 [2024-07-15 03:20:59.116428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.237 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.237 [2024-07-15 03:20:59.192487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:53.237 [2024-07-15 03:20:59.216561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.237 [2024-07-15 03:20:59.287544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.237 [2024-07-15 03:20:59.291676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:53.237 [2024-07-15 03:20:59.357504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:53.495 Running I/O for 1 seconds... 00:17:53.495 Running I/O for 1 seconds... 00:17:53.495 Running I/O for 1 seconds... 00:17:53.495 Running I/O for 1 seconds... 
00:17:54.428
00:17:54.428 Latency(us)
00:17:54.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:54.428 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:17:54.428 Nvme1n1 : 1.01 9938.50 38.82 0.00 0.00 12820.77 8155.59 21165.70
00:17:54.428 ===================================================================================================================
00:17:54.428 Total : 9938.50 38.82 0.00 0.00 12820.77 8155.59 21165.70
00:17:54.428
00:17:54.428 Latency(us)
00:17:54.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:54.428 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:17:54.428 Nvme1n1 : 1.02 5114.34 19.98 0.00 0.00 24730.25 8107.05 45244.11
00:17:54.428 ===================================================================================================================
00:17:54.428 Total : 5114.34 19.98 0.00 0.00 24730.25 8107.05 45244.11
00:17:54.685
00:17:54.685 Latency(us)
00:17:54.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:54.685 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:17:54.685 Nvme1n1 : 1.00 194328.49 759.10 0.00 0.00 656.10 307.96 910.22
00:17:54.685 ===================================================================================================================
00:17:54.685 Total : 194328.49 759.10 0.00 0.00 656.10 307.96 910.22
00:17:54.685
00:17:54.685 Latency(us)
00:17:54.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:54.685 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:17:54.685 Nvme1n1 : 1.01 4913.62 19.19 0.00 0.00 25929.24 8398.32 54370.61
00:17:54.685 ===================================================================================================================
00:17:54.685 Total : 4913.62 19.19 0.00 0.00 25929.24 8398.32 54370.61
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3178924
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3178927
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3178929
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:54.942 rmmod nvme_tcp
00:17:54.942 rmmod nvme_fabrics
00:17:54.942 rmmod nvme_keyring
00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3178899 ']' 00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3178899 00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3178899 ']' 00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3178899 00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.942 03:21:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3178899 00:17:54.942 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:54.942 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:54.942 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3178899' 00:17:54.942 killing process with pid 3178899 00:17:54.942 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3178899 00:17:54.942 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3178899 00:17:55.200 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.200 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.200 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.200 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.200 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.200 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.200 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.200 03:21:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.735 03:21:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:57.735 00:17:57.735 real 0m7.049s 00:17:57.735 user 0m16.179s 00:17:57.735 sys 0m3.464s 00:17:57.735 03:21:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:57.735 03:21:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:57.735 ************************************ 00:17:57.735 END TEST nvmf_bdev_io_wait 00:17:57.735 ************************************ 00:17:57.735 03:21:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:57.735 03:21:03 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:57.735 03:21:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:57.735 03:21:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.735 03:21:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:57.735 ************************************ 00:17:57.735 START TEST nvmf_queue_depth 00:17:57.735 ************************************ 
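The queue_depth test starting here drives the same one-Malloc0 target with a single bdevperf instance in RPC-server mode (-z) at a queue depth of 1024. Condensed from the rpc_cmd and bdevperf invocations traced below, the flow is roughly the following; sockets, NQNs, and flags are verbatim from this log, while the scripts/rpc.py spelling of the in-test rpc_cmd wrapper and the relative paths are assumptions:

    # condensed sketch of the steps traced below
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests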
00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:57.735 * Looking for test storage... 00:17:57.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:57.735 03:21:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:59.633 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.633 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:59.633 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:59.633 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:59.633 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:59.633 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:59.633 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:59.633 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:59.633 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:59.633 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:59.634 
03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:59.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:59.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:59.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:59.634 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:59.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:59.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:17:59.634 00:17:59.634 --- 10.0.0.2 ping statistics --- 00:17:59.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.634 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:17:59.634 00:17:59.634 --- 10.0.0.1 ping statistics --- 00:17:59.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.634 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3181256 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3181256 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3181256 ']' 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.634 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.635 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:59.635 [2024-07-15 03:21:05.583863] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
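The nvmf_tgt process launched just above runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit assembled. Extracting only the plumbing commands from the trace (interface names, addresses, and port exactly as logged), the test topology amounts to:

    # extracted from the nvmftestinit trace above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

which is why every connect in this section targets 10.0.0.2:4420.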
00:17:59.635 [2024-07-15 03:21:05.583961] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.635 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.635 [2024-07-15 03:21:05.653205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.635 [2024-07-15 03:21:05.742432] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.635 [2024-07-15 03:21:05.742495] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.635 [2024-07-15 03:21:05.742522] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.635 [2024-07-15 03:21:05.742535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.635 [2024-07-15 03:21:05.742547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.635 [2024-07-15 03:21:05.742584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:59.893 [2024-07-15 03:21:05.892518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:59.893 Malloc0 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:59.893 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.894 
03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:59.894 [2024-07-15 03:21:05.957633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3181280 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3181280 /var/tmp/bdevperf.sock 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3181280 ']' 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.894 03:21:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:59.894 [2024-07-15 03:21:06.009694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:17:59.894 [2024-07-15 03:21:06.009779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181280 ] 00:18:00.152 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.152 [2024-07-15 03:21:06.074805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.152 [2024-07-15 03:21:06.162902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.152 03:21:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.152 03:21:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:00.152 03:21:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:00.152 03:21:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.152 03:21:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.411 NVMe0n1 00:18:00.411 03:21:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.411 03:21:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:00.411 Running I/O for 10 seconds... 00:18:12.640 00:18:12.640 Latency(us) 00:18:12.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.640 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:12.640 Verification LBA range: start 0x0 length 0x4000 00:18:12.640 NVMe0n1 : 10.07 8533.37 33.33 0.00 0.00 119510.70 12913.02 75730.49 00:18:12.640 =================================================================================================================== 00:18:12.640 Total : 8533.37 33.33 0.00 0.00 119510.70 12913.02 75730.49 00:18:12.640 0 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3181280 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3181280 ']' 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3181280 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3181280 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3181280' 00:18:12.640 killing process with pid 3181280 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3181280 00:18:12.640 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.640 00:18:12.640 Latency(us) 00:18:12.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.640 
=================================================================================================================== 00:18:12.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3181280 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.640 rmmod nvme_tcp 00:18:12.640 rmmod nvme_fabrics 00:18:12.640 rmmod nvme_keyring 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3181256 ']' 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3181256 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3181256 ']' 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3181256 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3181256 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3181256' 00:18:12.640 killing process with pid 3181256 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3181256 00:18:12.640 03:21:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3181256 00:18:12.640 03:21:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:12.640 03:21:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:12.640 03:21:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:12.640 03:21:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.640 03:21:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:12.640 03:21:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.640 03:21:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.640 03:21:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.208 03:21:19 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:13.208 00:18:13.208 real 0m15.979s 00:18:13.208 user 0m22.494s 00:18:13.208 sys 0m2.993s 00:18:13.208 03:21:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:13.208 03:21:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:13.208 ************************************ 00:18:13.208 END TEST nvmf_queue_depth 00:18:13.208 ************************************ 00:18:13.208 03:21:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:13.208 03:21:19 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:13.208 03:21:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:13.208 03:21:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.208 03:21:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:13.208 ************************************ 00:18:13.208 START TEST nvmf_target_multipath 00:18:13.208 ************************************ 00:18:13.208 03:21:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:13.467 * Looking for test storage... 00:18:13.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.467 03:21:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:13.468 03:21:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:15.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:15.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:15.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:15.375 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:15.375 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.376 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:15.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:18:15.635 00:18:15.635 --- 10.0.0.2 ping statistics --- 00:18:15.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.635 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:15.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:18:15.635 00:18:15.635 --- 10.0.0.1 ping statistics --- 00:18:15.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.635 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:15.635 only one NIC for nvmf test 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:15.635 rmmod nvme_tcp 00:18:15.635 rmmod nvme_fabrics 00:18:15.635 rmmod nvme_keyring 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.635 03:21:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:17.538 00:18:17.538 real 0m4.332s 00:18:17.538 user 0m0.832s 00:18:17.538 sys 0m1.493s 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:17.538 03:21:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:17.538 ************************************ 00:18:17.538 END TEST nvmf_target_multipath 00:18:17.538 ************************************ 00:18:17.795 03:21:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:17.795 03:21:23 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:17.795 03:21:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:17.795 03:21:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.795 03:21:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:17.795 ************************************ 00:18:17.795 START TEST nvmf_zcopy 00:18:17.795 ************************************ 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:17.795 * Looking for test storage... 
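Both the multipath test that just finished and the zcopy test starting here go through the same nvmftestinit plumbing: gather_supported_nvmf_pci_devs matches the two Intel E810 ports (0x8086:0x159b, ice driver) and finds their netdevs cvl_0_0 and cvl_0_1, and nvmf_tcp_init (nvmf/common.sh@229-268 in the trace) then moves one port into a private network namespace so that target and initiator can exchange real TCP traffic on a single host. Consolidated into a standalone sketch, using the interface names and 10.0.0.x addresses from this run (every command below appears verbatim in the trace):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port in the host firewall
    ping -c 1 10.0.0.2                                     # root namespace -> target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator side

With that topology in place, 10.0.0.2 (cvl_0_0, inside cvl_0_0_ns_spdk) is the target address and 10.0.0.1 (cvl_0_1) the initiator address. The multipath test above tore all of this down and exited 0 almost immediately because NVMF_SECOND_TARGET_IP is empty on this rig ("only one NIC for nvmf test"); the zcopy test below rebuilds the same topology and actually runs I/O.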
00:18:17.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:17.795 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:17.796 03:21:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.694 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:19.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.695 
03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:19.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:19.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:19.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:19.695 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:19.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:18:19.955 00:18:19.955 --- 10.0.0.2 ping statistics --- 00:18:19.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.955 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:18:19.955 00:18:19.955 --- 10.0.0.1 ping statistics --- 00:18:19.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.955 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3186954 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3186954 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3186954 ']' 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.955 03:21:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.955 [2024-07-15 03:21:25.971412] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:19.955 [2024-07-15 03:21:25.971502] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.955 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.955 [2024-07-15 03:21:26.034678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.259 [2024-07-15 03:21:26.119658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.259 [2024-07-15 03:21:26.119705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:20.259 [2024-07-15 03:21:26.119729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.259 [2024-07-15 03:21:26.119739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.259 [2024-07-15 03:21:26.119748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.259 [2024-07-15 03:21:26.119773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.259 [2024-07-15 03:21:26.266472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.259 [2024-07-15 03:21:26.282666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.259 malloc0 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.259 
03:21:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:20.259 { 00:18:20.259 "params": { 00:18:20.259 "name": "Nvme$subsystem", 00:18:20.259 "trtype": "$TEST_TRANSPORT", 00:18:20.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.259 "adrfam": "ipv4", 00:18:20.259 "trsvcid": "$NVMF_PORT", 00:18:20.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.259 "hdgst": ${hdgst:-false}, 00:18:20.259 "ddgst": ${ddgst:-false} 00:18:20.259 }, 00:18:20.259 "method": "bdev_nvme_attach_controller" 00:18:20.259 } 00:18:20.259 EOF 00:18:20.259 )") 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:20.259 03:21:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:20.259 "params": { 00:18:20.259 "name": "Nvme1", 00:18:20.259 "trtype": "tcp", 00:18:20.259 "traddr": "10.0.0.2", 00:18:20.259 "adrfam": "ipv4", 00:18:20.259 "trsvcid": "4420", 00:18:20.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.259 "hdgst": false, 00:18:20.259 "ddgst": false 00:18:20.259 }, 00:18:20.259 "method": "bdev_nvme_attach_controller" 00:18:20.259 }' 00:18:20.259 [2024-07-15 03:21:26.363761] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:20.259 [2024-07-15 03:21:26.363830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186975 ] 00:18:20.517 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.517 [2024-07-15 03:21:26.426823] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.517 [2024-07-15 03:21:26.519467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.775 Running I/O for 10 seconds... 
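Before the 10-second verify job above was started, zcopy.sh brought the target up inside the namespace (nvmfappstart, traced earlier: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, followed by waitforlisten on /var/tmp/spdk.sock) and provisioned it through rpc_cmd. The RPC endpoint is a Unix-domain socket, so it remains reachable from the root namespace even though the target's network stack lives in cvl_0_0_ns_spdk. Replayed as standalone scripts/rpc.py calls, the provisioning sequence at zcopy.sh@22-30 is roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB RAM-backed bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as NSID 1

(-a allows any host NQN, -s sets the serial number, -m 10 caps the namespace count; all six commands appear as rpc_cmd invocations in the trace above.) The results of the verify job follow.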
00:18:32.978
00:18:32.978                                                              Latency(us)
00:18:32.978 Device Information              : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average      min       max
00:18:32.978 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:32.978 Verification LBA range: start 0x0 length 0x1000
00:18:32.978 Nvme1n1                         :     10.01   5730.46    44.77     0.00    0.00   22273.60   682.67  33981.63
00:18:32.979 ===================================================================================================================
00:18:32.979 Total                           :             5730.46    44.77     0.00    0.00   22273.60   682.67  33981.63
00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3188283 00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:32.979 { 00:18:32.979 "params": { 00:18:32.979 "name": "Nvme$subsystem", 00:18:32.979 "trtype": "$TEST_TRANSPORT", 00:18:32.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:32.979 "adrfam": "ipv4", 00:18:32.979 "trsvcid": "$NVMF_PORT", 00:18:32.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:32.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:32.979 "hdgst": ${hdgst:-false}, 00:18:32.979 "ddgst": ${ddgst:-false} 00:18:32.979 }, 00:18:32.979 "method": "bdev_nvme_attach_controller" 00:18:32.979 } 00:18:32.979 EOF 00:18:32.979 )") 00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:32.979 [2024-07-15 03:21:37.140656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.140707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
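What bdevperf reads through --json /dev/fd/63 here (and through /dev/fd/62 for the verify run above) is the output of gen_nvmf_target_json; the /dev/fd paths are anonymous pipes, presumably process substitutions in zcopy.sh. The function emits one bdev_nvme_attach_controller fragment per subsystem (the heredoc traced just above) and folds them through jq into a full bdev-subsystem config; the resolved values are printed immediately below. Reassembled as a standalone invocation it would look roughly like the following, where bdevperf_nvme.json is a hypothetical file name and the outer subsystems/bdev wrapper is reconstructed from nvmf/common.sh; only the inner object appears verbatim in the trace:

    # bdevperf_nvme.json (hypothetical file; the harness pipes this anonymously instead)
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

    bdevperf --json bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192

With this config bdevperf attaches to the target as an ordinary NVMe/TCP host, and its Nvme1n1 job drives NSID 1 of cnode1. The "Requested NSID 1 already in use" / "Unable to add namespace" pairs that begin just above and fill the rest of this stretch are deliberate; see the loop sketch after the error run below.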
00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:32.979 03:21:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:32.979 "params": { 00:18:32.979 "name": "Nvme1", 00:18:32.979 "trtype": "tcp", 00:18:32.979 "traddr": "10.0.0.2", 00:18:32.979 "adrfam": "ipv4", 00:18:32.979 "trsvcid": "4420", 00:18:32.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.979 "hdgst": false, 00:18:32.979 "ddgst": false 00:18:32.979 }, 00:18:32.979 "method": "bdev_nvme_attach_controller" 00:18:32.979 }' 00:18:32.979 [2024-07-15 03:21:37.148610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.148637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.156630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.156657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.164651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.164676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.172672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.172696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.180691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.180716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.181067] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:18:32.979 [2024-07-15 03:21:37.181154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188283 ] 00:18:32.979 [2024-07-15 03:21:37.188718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.188751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.196734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.196759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.204754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.204778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.979 [2024-07-15 03:21:37.212779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.212805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.220800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.220825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.228822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.228846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.236843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.236868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.244176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.979 [2024-07-15 03:21:37.244866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.244899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.252953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.252997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.260944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.260971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.268951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.268974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.276973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 03:21:37.276996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.979 [2024-07-15 03:21:37.284979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.979 [2024-07-15 
03:21:37.285001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:32.979 [... the pair "subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats at ~8 ms intervals from 03:21:37.293014 through 03:21:37.678150; identical entries elided ...]
00:18:32.979 [2024-07-15 03:21:37.335344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:32.980 Running I/O for 5 seconds...
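The storm above is the expected failure path of a namespace-attach loop: every request asks for NSID 1, the subsystem already holds NSID 1, so spdk_nvmf_subsystem_add_ns_ext rejects the attach and the paused-subsystem RPC callback logs the follow-up "Unable to add namespace". A minimal reproduction sketch against a local target, assuming a built SPDK tree with scripts/rpc.py, a running nvmf_tgt, and nothing else configured (the NQN, serial number, and bdev name below are illustrative, not taken from this run):

  sudo ./build/bin/nvmf_tgt &                            # start the target (illustrative path)
  sudo ./scripts/rpc.py nvmf_create_transport -t TCP     # create a TCP transport
  sudo ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # succeeds; NSID 1 is now taken
  sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # fails with "Requested NSID 1 already in use"

The second nvmf_subsystem_add_ns call exercises the same guard seen in this log: the duplicate NSID is refused in subsystem.c and the RPC layer reports the failure back to the client, which is why the two errors always appear as a pair.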
00:18:32.980 [... the same error pair continues at ~11-12 ms intervals from 03:21:37.690040 through 03:21:40.650 while the 5-second I/O run proceeds; elapsed markers advance from 00:18:32.980 to 00:18:34.533; identical entries elided ...]
00:18:34.533 [2024-07-15 03:21:40.650505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.533 [2024-07-15 03:21:40.650536]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.533 [2024-07-15 03:21:40.662826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.533 [2024-07-15 03:21:40.662858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.533 [2024-07-15 03:21:40.674339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.533 [2024-07-15 03:21:40.674369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.685184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.685212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.696160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.696188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.708783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.708812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.719176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.719204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.729781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.729809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.740239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.740267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.750635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.750664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.761630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.761658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.772689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.772717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.785110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.785137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.794962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.794989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.805192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.805220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.816095] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.816133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.826321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.826349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.837426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.837453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.848252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.848280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.860461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.860489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.870716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.870744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.881263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.881291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.891815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.891844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.902478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.902506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.913148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.913177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.791 [2024-07-15 03:21:40.925569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.791 [2024-07-15 03:21:40.925596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:40.935416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:40.935444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:40.945900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:40.945928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:40.956536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:40.956564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:40.969058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:40.969086] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:40.978086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:40.978115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:40.990772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:40.990801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.000770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.000799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.011227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.011255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.021540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.021569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.031610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.031638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.042100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.042128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.054043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.054070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.064326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.064354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.075113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.075141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.086011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.086039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.096473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.096510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.107121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.107149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.117524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.117552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.127860] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.127895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.138360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.138388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.149164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.149193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.159743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.159771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.172115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.172143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.182268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.182296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.050 [2024-07-15 03:21:41.192353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.050 [2024-07-15 03:21:41.192381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.202567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.202595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.213061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.213089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.223439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.223466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.233725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.233753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.244648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.244676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.255412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.255439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.267708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.267737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.277772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.277801] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.287906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.287935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.298833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.298885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.310428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.310460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.322093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.322122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.333498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.333530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.345023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.345054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.356356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.356388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.367323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.367355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.378688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.378720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.390161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.390206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.401583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.401614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.412685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.412716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.424048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.424077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.437029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.437058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.309 [2024-07-15 03:21:41.447725] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.309 [2024-07-15 03:21:41.447756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.459438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.459469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.471042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.471070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.483889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.483935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.494420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.494452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.506625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.506656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.518253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.518294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.529851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.529892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.541668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.541700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.553632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.553664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.565569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.565601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.576343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.576375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.588076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.588104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.599389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.599421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.610594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.610625] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.621773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.621805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.633718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.633749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.645153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.645200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.658192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.658224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.567 [2024-07-15 03:21:41.669216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.567 [2024-07-15 03:21:41.669247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.568 [2024-07-15 03:21:41.680886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.568 [2024-07-15 03:21:41.680932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.568 [2024-07-15 03:21:41.692109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.568 [2024-07-15 03:21:41.692138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.568 [2024-07-15 03:21:41.703331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.568 [2024-07-15 03:21:41.703363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.825 [2024-07-15 03:21:41.715142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.825 [2024-07-15 03:21:41.715170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.825 [2024-07-15 03:21:41.726546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.825 [2024-07-15 03:21:41.726577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.825 [2024-07-15 03:21:41.738388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.738429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.749604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.749635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.760981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.761010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.772291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.772323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.784126] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.784154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.796192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.796238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.807728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.807759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.819120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.819149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.830741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.830772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.842491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.842522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.854142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.854187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.865514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.865546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.876666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.876698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.888279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.888310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.899276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.899307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.910329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.910361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.923407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.923438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.933488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.933520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.945804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.945835] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.826 [2024-07-15 03:21:41.957673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.826 [2024-07-15 03:21:41.957714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:41.969168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:41.969196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:41.980903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:41.980947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:41.992295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:41.992327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.003444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.003475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.015301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.015332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.026834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.026866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.037997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.038026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.049861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.049918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.061469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.061500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.074889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.074934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.085579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.085610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.100905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.100950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.111106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.111135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.123025] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.123054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.134793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.134824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.146705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.146736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.158345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.158376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.084 [2024-07-15 03:21:42.169783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.084 [2024-07-15 03:21:42.169814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.085 [2024-07-15 03:21:42.181555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.085 [2024-07-15 03:21:42.181586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.085 [2024-07-15 03:21:42.193251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.085 [2024-07-15 03:21:42.193282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.085 [2024-07-15 03:21:42.204639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.085 [2024-07-15 03:21:42.204670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.085 [2024-07-15 03:21:42.217961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.085 [2024-07-15 03:21:42.217989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.228649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.228680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.240147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.240190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.252200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.252231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.264812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.264843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.274999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.275027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.287287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.287318] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.298621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.298652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.311947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.311975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.321934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.321961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.332741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.332769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.345773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.345800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.356245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.356273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.367266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.367293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.379985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.380013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.389810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.389838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.400680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.400708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.413042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.413071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.423233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.423261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.433719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.343 [2024-07-15 03:21:42.433748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.343 [2024-07-15 03:21:42.444368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.344 [2024-07-15 03:21:42.444397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.344 [2024-07-15 03:21:42.457167] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.344 [2024-07-15 03:21:42.457196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.344 [2024-07-15 03:21:42.467263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.344 [2024-07-15 03:21:42.467291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.344 [2024-07-15 03:21:42.478029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.344 [2024-07-15 03:21:42.478058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.490135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.490164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.500167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.500195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.510676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.510705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.521296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.521325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.531794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.531824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.544169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.544197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.554466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.554494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.564902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.564939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.575528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.575556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.586369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.586400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.597788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.597816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.602 [2024-07-15 03:21:42.608657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.602 [2024-07-15 03:21:42.608685] 
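The two messages above always arrive as a pair: subsystem.c:2054 (spdk_nvmf_subsystem_add_ns_ext) rejects the duplicate NSID, and nvmf_rpc.c:1546 reports the failed RPC. As a rough reproduction sketch, outside this run, the same rejection can be triggered by hand against any running SPDK target using the rpc.py client that the test's rpc_cmd helper wraps; the subsystem NQN and the malloc0 bdev name below are taken from this run's trace, everything else is assumed:

    # Sketch only: assumes a running SPDK target reachable via scripts/rpc.py,
    # with subsystem nqn.2016-06.io.spdk:cnode1 and bdev malloc0 already created.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first attach: NSID 1 now in use
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # repeat fails: "Requested NSID 1 already in use"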
[... further identical add_ns failures through 03:21:42.701 elided ...]
00:18:36.602 Latency(us)
00:18:36.602 Device Information          : runtime(s)      IOPS    MiB/s   Fail/s   TO/s   Average      min      max
00:18:36.602 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:36.602 Nvme1n1                     :       5.01  11196.39    87.47     0.00   0.00  11417.45  4951.61  20097.71
00:18:36.602 ===================================================================================================================
00:18:36.602 Total                       :             11196.39    87.47     0.00   0.00  11417.45  4951.61  20097.71
00:18:36.602 [2024-07-15 03:21:42.706954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:36.602 [2024-07-15 03:21:42.706979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair continues for a few more rounds through 03:21:42.731 ...]
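The summary's throughput column is consistent with the reported IOPS at the job's 8192-byte I/O size; a quick arithmetic check (any POSIX awk):

    awk 'BEGIN { printf "%.2f MiB/s\n", 11196.39 * 8192 / (1024 * 1024) }'   # -> 87.47 MiB/s, matching the table

Little's law also roughly holds for this job: 128 outstanding I/Os divided by the 11417.45 us average latency gives about 11,200 IOPS, in line with the measured 11196.39.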
[... the same failing pair repeats through 03:21:42.931 (elapsed 00:18:36.861), then the add_ns loop ends ...]
00:18:36.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3188283) - No such process
00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3188283
00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
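Here the script rebuilds NSID 1 on top of a delay bdev before launching the abort example: bdev_delay_create wraps malloc0 in delay0 with 1000000 us (one second) average and 99th-percentile latencies for both reads and writes, so queued I/O stays outstanding long enough to be aborted. A standalone sketch of the same setup, using the rpc.py client that rpc_cmd wraps (values copied from the trace; paths assume the SPDK repo root):

    # Sketch: recreate the delayed namespace and drive it with the abort example.
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s per I/O
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1                          # expose as NSID 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

In the abort output that follows, the counters add up: 248 successful plus 172 unsuccessful aborts equal the 420 submitted, and with the 33 that failed to submit that makes 453 attempts, matching the 320 completed plus 133 failed I/Os.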
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:36.861 delay0 00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.861 03:21:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:36.861 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.120 [2024-07-15 03:21:43.056140] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:43.676 Initializing NVMe Controllers 00:18:43.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:43.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:43.676 Initialization complete. Launching workers. 00:18:43.676 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 133 00:18:43.676 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 420, failed to submit 33 00:18:43.676 success 248, unsuccess 172, failed 0 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.676 rmmod nvme_tcp 00:18:43.676 rmmod nvme_fabrics 00:18:43.676 rmmod nvme_keyring 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3186954 ']' 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3186954 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3186954 ']' 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3186954 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3186954 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:43.676 
03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3186954' 00:18:43.676 killing process with pid 3186954 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3186954 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3186954 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.676 03:21:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.581 03:21:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:45.581 00:18:45.581 real 0m27.923s 00:18:45.581 user 0m41.405s 00:18:45.581 sys 0m8.270s 00:18:45.581 03:21:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:45.581 03:21:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:45.581 ************************************ 00:18:45.581 END TEST nvmf_zcopy 00:18:45.581 ************************************ 00:18:45.581 03:21:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:45.581 03:21:51 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:45.581 03:21:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:45.581 03:21:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.581 03:21:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.581 ************************************ 00:18:45.581 START TEST nvmf_nmic 00:18:45.581 ************************************ 00:18:45.581 03:21:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:45.840 * Looking for test storage... 
00:18:45.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.840 03:21:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.841 03:21:51 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:45.841 03:21:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.749 
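The hostnqn/hostid pair traced in the common.sh sourcing above is worth calling out: the generated hostnqn embeds a UUID, and that same UUID is reused verbatim as the hostid handed to nvme connect later in the test. A minimal sketch of the equivalent shell (assumes nvme-cli is installed; the UUID is host-specific):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # strip the prefix, keep the UUID for --hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")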
03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:47.749 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.749 03:21:53 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:47.749 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:47.749 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.749 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:47.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
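The PCI scan that just completed is plain sysfs matching: each device's vendor/device IDs are compared against the E810 table (0x8086:0x159b here), and the interface name is read out of the function's net/ directory. A rough standalone equivalent (a sketch, assuming the ice-bound ports sit at 0000:0a:00.0/.1 as in this run):

    for pci in /sys/bus/pci/devices/*; do
        grep -q 0x8086 "$pci/vendor" && grep -q 0x159b "$pci/device" || continue
        ls "$pci/net"    # -> cvl_0_0 under 0000:0a:00.0, cvl_0_1 under 0000:0a:00.1
    done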
00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:18:47.750 00:18:47.750 --- 10.0.0.2 ping statistics --- 00:18:47.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.750 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:18:47.750 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:48.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:18:48.007 00:18:48.007 --- 10.0.0.1 ping statistics --- 00:18:48.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.007 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.007 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3191576 00:18:48.008 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:48.008 03:21:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3191576 00:18:48.008 03:21:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3191576 ']' 00:18:48.008 03:21:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.008 03:21:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.008 03:21:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.008 03:21:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.008 03:21:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.008 [2024-07-15 03:21:53.969056] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:48.008 [2024-07-15 03:21:53.969142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.008 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.008 [2024-07-15 03:21:54.041047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:48.008 [2024-07-15 03:21:54.127844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.008 [2024-07-15 03:21:54.127916] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:48.008 [2024-07-15 03:21:54.127946] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.008 [2024-07-15 03:21:54.127959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.008 [2024-07-15 03:21:54.127969] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.008 [2024-07-15 03:21:54.128018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.008 [2024-07-15 03:21:54.128077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.008 [2024-07-15 03:21:54.128143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.008 [2024-07-15 03:21:54.128145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 [2024-07-15 03:21:54.284799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 Malloc0 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 [2024-07-15 03:21:54.338501] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:48.266 test case1: single bdev can't be used in multiple subsystems 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 [2024-07-15 03:21:54.362339] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:48.266 [2024-07-15 03:21:54.362368] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:48.266 [2024-07-15 03:21:54.362384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.266 request: 00:18:48.266 { 00:18:48.266 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:48.266 "namespace": { 00:18:48.266 "bdev_name": "Malloc0", 00:18:48.266 "no_auto_visible": false 00:18:48.266 }, 00:18:48.266 "method": "nvmf_subsystem_add_ns", 00:18:48.266 "req_id": 1 00:18:48.266 } 00:18:48.266 Got JSON-RPC error response 00:18:48.266 response: 00:18:48.266 { 00:18:48.266 "code": -32602, 00:18:48.266 "message": "Invalid parameters" 00:18:48.266 } 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:48.266 Adding namespace failed - expected result. 
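Test case1 above is a negative test of bdev claim semantics: adding Malloc0 as a namespace of cnode1 takes an exclusive_write claim on the bdev, so the attempt to attach the same bdev to cnode2 is rejected with JSON-RPC error -32602, exactly as logged. The sequence can be replayed by hand against a running nvmf_tgt (a sketch; scripts/rpc.py targets the default /var/tmp/spdk.sock socket):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # claims Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # fails: bdev already claimed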
00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:48.266 test case2: host connect to nvmf target in multiple paths 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:48.266 [2024-07-15 03:21:54.370455] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.266 03:21:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:49.198 03:21:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:49.763 03:21:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:49.763 03:21:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:49.763 03:21:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.763 03:21:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:49.763 03:21:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:51.693 03:21:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:51.693 03:21:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:51.693 03:21:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:51.693 03:21:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:51.693 03:21:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.693 03:21:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:51.693 03:21:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:51.693 [global] 00:18:51.693 thread=1 00:18:51.693 invalidate=1 00:18:51.693 rw=write 00:18:51.693 time_based=1 00:18:51.693 runtime=1 00:18:51.693 ioengine=libaio 00:18:51.693 direct=1 00:18:51.693 bs=4096 00:18:51.693 iodepth=1 00:18:51.693 norandommap=0 00:18:51.693 numjobs=1 00:18:51.693 00:18:51.693 verify_dump=1 00:18:51.693 verify_backlog=512 00:18:51.693 verify_state_save=0 00:18:51.693 do_verify=1 00:18:51.693 verify=crc32c-intel 00:18:51.693 [job0] 00:18:51.693 filename=/dev/nvme0n1 00:18:51.693 Could not set queue depth (nvme0n1) 00:18:51.949 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.949 fio-3.35 00:18:51.949 Starting 1 thread 00:18:53.318 00:18:53.318 job0: (groupid=0, jobs=1): err= 0: pid=3192180: Mon Jul 15 03:21:59 2024 00:18:53.318 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:18:53.318 slat (nsec): min=14598, max=36555, avg=26004.00, stdev=8996.82 
00:18:53.318 clat (usec): min=40886, max=42031, avg=41630.30, stdev=485.67 00:18:53.318 lat (usec): min=40919, max=42045, avg=41656.31, stdev=485.76 00:18:53.318 clat percentiles (usec): 00:18:53.318 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:53.318 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:18:53.318 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:53.318 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:53.318 | 99.99th=[42206] 00:18:53.318 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:18:53.318 slat (nsec): min=7133, max=39250, avg=11558.17, stdev=5491.30 00:18:53.318 clat (usec): min=156, max=389, avg=207.59, stdev=40.49 00:18:53.318 lat (usec): min=164, max=404, avg=219.15, stdev=41.94 00:18:53.318 clat percentiles (usec): 00:18:53.318 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 172], 00:18:53.318 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 196], 60.00th=[ 212], 00:18:53.318 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 260], 00:18:53.318 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 392], 99.95th=[ 392], 00:18:53.318 | 99.99th=[ 392] 00:18:53.318 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.318 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.318 lat (usec) : 250=88.39%, 500=7.49% 00:18:53.318 lat (msec) : 50=4.12% 00:18:53.318 cpu : usr=0.19%, sys=0.97%, ctx=534, majf=0, minf=2 00:18:53.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.318 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.318 00:18:53.318 Run status group 0 (all jobs): 00:18:53.318 READ: bw=85.4KiB/s (87.4kB/s), 85.4KiB/s-85.4KiB/s (87.4kB/s-87.4kB/s), io=88.0KiB (90.1kB), run=1031-1031msec 00:18:53.318 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:18:53.318 00:18:53.318 Disk stats (read/write): 00:18:53.318 nvme0n1: ios=68/512, merge=0/0, ticks=781/102, in_queue=883, util=91.58% 00:18:53.318 03:21:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:53.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:53.318 03:21:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:53.318 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:53.318 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:53.318 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.318 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:53.318 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.318 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:53.318 03:21:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:53.319 rmmod nvme_tcp 00:18:53.319 rmmod nvme_fabrics 00:18:53.319 rmmod nvme_keyring 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3191576 ']' 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3191576 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3191576 ']' 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3191576 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3191576 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3191576' 00:18:53.319 killing process with pid 3191576 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3191576 00:18:53.319 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3191576 00:18:53.578 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:53.578 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:53.578 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:53.578 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:53.578 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:53.578 03:21:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.578 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.578 03:21:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.113 03:22:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:56.113 00:18:56.113 real 0m9.964s 00:18:56.113 user 0m22.824s 00:18:56.113 sys 0m2.271s 00:18:56.113 03:22:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:56.113 03:22:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.113 ************************************ 00:18:56.113 END TEST nvmf_nmic 00:18:56.113 ************************************ 00:18:56.113 03:22:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:56.113 03:22:01 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:56.113 03:22:01 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:56.113 03:22:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.113 03:22:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:56.113 ************************************ 00:18:56.113 START TEST nvmf_fio_target 00:18:56.113 ************************************ 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:56.113 * Looking for test storage... 00:18:56.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.113 03:22:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:56.114 03:22:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.014 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.014 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.015 03:22:03 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:58.015 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:58.015 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.015 03:22:03 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:58.015 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:58.015 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:58.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:18:58.015 00:18:58.015 --- 10.0.0.2 ping statistics --- 00:18:58.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.015 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:18:58.015 00:18:58.015 --- 10.0.0.1 ping statistics --- 00:18:58.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.015 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3194247 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:58.015 03:22:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3194247 00:18:58.016 03:22:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3194247 ']' 00:18:58.016 03:22:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.016 03:22:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.016 03:22:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
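At this point nvmftestinit has turned the two ice ports discovered above into a self-contained loopback topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, and both directions are verified with a ping before the target application is launched inside the namespace. Condensed from the xtrace above, the setup amounts to the following (a reference sketch only; the cvl_* names are what this particular ice NIC exposes and will differ on other hardware):

    # move one port of the NIC into its own namespace (target side)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # address the two sides: initiator in the root namespace,
    # target inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring the links (and the namespace loopback) up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # let NVMe/TCP traffic (port 4420) through the host firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because nvmf_tgt is then started under ip netns exec cvl_0_0_ns_spdk, the NVMe/TCP traffic in the tests below travels between the two physical ports rather than over kernel loopback; only the rpc.py control plane still reaches the target through the UNIX socket /var/tmp/spdk.sock, which is unaffected by the network namespace split.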
00:18:58.016 03:22:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.016 03:22:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.016 [2024-07-15 03:22:03.968252] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:58.016 [2024-07-15 03:22:03.968338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.016 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.016 [2024-07-15 03:22:04.039084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.016 [2024-07-15 03:22:04.132589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.016 [2024-07-15 03:22:04.132653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.016 [2024-07-15 03:22:04.132669] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.016 [2024-07-15 03:22:04.132683] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.016 [2024-07-15 03:22:04.132694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.016 [2024-07-15 03:22:04.132772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.016 [2024-07-15 03:22:04.132825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.016 [2024-07-15 03:22:04.132884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.016 [2024-07-15 03:22:04.132892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.273 03:22:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.273 03:22:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:58.274 03:22:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:58.274 03:22:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:58.274 03:22:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.274 03:22:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.274 03:22:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:58.531 [2024-07-15 03:22:04.507561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.531 03:22:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:58.788 03:22:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:58.788 03:22:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.046 03:22:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:59.046 03:22:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.305 03:22:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
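The target is then provisioned entirely through rpc.py against that socket. The TCP transport was created just above (nvmf_create_transport -t tcp -o -u 8192), and fio.sh is in the middle of creating seven 64 MB malloc bdevs: Malloc0 and Malloc1 are exported as namespaces directly, Malloc2 and Malloc3 (creation continues just below) are assembled into a raid0 bdev, and Malloc4 through Malloc6 into a concat0 bdev, all four of which end up in a single subsystem with a TCP listener. A condensed sketch of the whole sequence, with $rpc standing in for the full scripts/rpc.py path logged above (the actual fio.sh issues these one call at a time through helper variables):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as in the log

    for i in $(seq 7); do                            # names Malloc0..Malloc6 are auto-assigned
        $rpc bdev_malloc_create 64 512               # 64 MB bdev, 512-byte blocks
    done
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

Once the initiator runs nvme connect against 10.0.0.2:4420 and waitforserial counts four block devices carrying the SPDKISFASTANDAWESOME serial, the namespaces surface as /dev/nvme0n1 through /dev/nvme0n4 and become the filenames in the fio job files below. Five fio passes follow: write and randwrite at iodepth 1, write and randwrite at iodepth 128, and a final 10-second read pass during which concat0, raid0 and the Malloc bdevs are deliberately deleted underneath the running I/O, so the io_u "Remote I/O error" lines near the end appear to be the fault-injection behaviour under test rather than a harness failure.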
00:18:59.305 03:22:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.563 03:22:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:59.563 03:22:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:59.820 03:22:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.079 03:22:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:00.079 03:22:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.337 03:22:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:00.337 03:22:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.594 03:22:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:00.594 03:22:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:00.851 03:22:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:01.108 03:22:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:01.108 03:22:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:01.366 03:22:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:01.366 03:22:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:01.625 03:22:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.883 [2024-07-15 03:22:07.834440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.883 03:22:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:02.141 03:22:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:02.399 03:22:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.965 03:22:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:02.965 03:22:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:02.965 03:22:08 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.965 03:22:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:02.965 03:22:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:02.965 03:22:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:04.861 03:22:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:04.861 03:22:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:04.861 03:22:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.861 03:22:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:04.861 03:22:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.861 03:22:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:04.861 03:22:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:04.861 [global] 00:19:04.861 thread=1 00:19:04.861 invalidate=1 00:19:04.861 rw=write 00:19:04.861 time_based=1 00:19:04.861 runtime=1 00:19:04.861 ioengine=libaio 00:19:04.861 direct=1 00:19:04.861 bs=4096 00:19:04.861 iodepth=1 00:19:04.861 norandommap=0 00:19:04.861 numjobs=1 00:19:04.861 00:19:04.861 verify_dump=1 00:19:04.861 verify_backlog=512 00:19:04.861 verify_state_save=0 00:19:04.861 do_verify=1 00:19:04.861 verify=crc32c-intel 00:19:04.861 [job0] 00:19:04.861 filename=/dev/nvme0n1 00:19:04.861 [job1] 00:19:04.861 filename=/dev/nvme0n2 00:19:04.861 [job2] 00:19:04.861 filename=/dev/nvme0n3 00:19:04.861 [job3] 00:19:04.861 filename=/dev/nvme0n4 00:19:05.118 Could not set queue depth (nvme0n1) 00:19:05.118 Could not set queue depth (nvme0n2) 00:19:05.118 Could not set queue depth (nvme0n3) 00:19:05.118 Could not set queue depth (nvme0n4) 00:19:05.118 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.118 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.119 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.119 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.119 fio-3.35 00:19:05.119 Starting 4 threads 00:19:06.489 00:19:06.489 job0: (groupid=0, jobs=1): err= 0: pid=3195316: Mon Jul 15 03:22:12 2024 00:19:06.489 read: IOPS=159, BW=639KiB/s (654kB/s)(640KiB/1002msec) 00:19:06.489 slat (nsec): min=5823, max=34378, avg=11779.76, stdev=6064.46 00:19:06.489 clat (usec): min=228, max=41018, avg=5404.40, stdev=13494.83 00:19:06.489 lat (usec): min=235, max=41030, avg=5416.18, stdev=13497.81 00:19:06.489 clat percentiles (usec): 00:19:06.489 | 1.00th=[ 231], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 241], 00:19:06.489 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 277], 00:19:06.489 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[41157], 95.00th=[41157], 00:19:06.489 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:06.489 | 99.99th=[41157] 00:19:06.489 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:19:06.489 slat (nsec): min=10563, max=53699, avg=21587.91, stdev=6480.53 00:19:06.489 clat 
(usec): min=172, max=459, avg=235.68, stdev=52.61 00:19:06.489 lat (usec): min=195, max=489, avg=257.27, stdev=53.38 00:19:06.489 clat percentiles (usec): 00:19:06.489 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 196], 00:19:06.489 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 231], 00:19:06.489 | 70.00th=[ 243], 80.00th=[ 269], 90.00th=[ 314], 95.00th=[ 355], 00:19:06.489 | 99.00th=[ 412], 99.50th=[ 424], 99.90th=[ 461], 99.95th=[ 461], 00:19:06.489 | 99.99th=[ 461] 00:19:06.489 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.489 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.489 lat (usec) : 250=64.29%, 500=32.59% 00:19:06.489 lat (msec) : 10=0.15%, 50=2.98% 00:19:06.489 cpu : usr=0.90%, sys=1.00%, ctx=673, majf=0, minf=2 00:19:06.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.489 issued rwts: total=160,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.489 job1: (groupid=0, jobs=1): err= 0: pid=3195317: Mon Jul 15 03:22:12 2024 00:19:06.489 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:19:06.489 slat (nsec): min=13760, max=40325, avg=24960.10, stdev=8966.10 00:19:06.489 clat (usec): min=40893, max=42038, avg=41208.69, stdev=442.98 00:19:06.489 lat (usec): min=40921, max=42052, avg=41233.65, stdev=441.29 00:19:06.489 clat percentiles (usec): 00:19:06.489 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:06.489 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:06.489 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:19:06.489 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.489 | 99.99th=[42206] 00:19:06.489 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:19:06.489 slat (nsec): min=11852, max=66616, avg=24757.82, stdev=5979.98 00:19:06.489 clat (usec): min=192, max=509, avg=253.07, stdev=53.28 00:19:06.489 lat (usec): min=214, max=548, avg=277.83, stdev=55.44 00:19:06.489 clat percentiles (usec): 00:19:06.489 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:19:06.489 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 237], 60.00th=[ 247], 00:19:06.489 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 334], 95.00th=[ 379], 00:19:06.489 | 99.00th=[ 445], 99.50th=[ 453], 99.90th=[ 510], 99.95th=[ 510], 00:19:06.489 | 99.99th=[ 510] 00:19:06.489 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.489 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.489 lat (usec) : 250=61.16%, 500=34.71%, 750=0.19% 00:19:06.489 lat (msec) : 50=3.94% 00:19:06.489 cpu : usr=1.09%, sys=1.29%, ctx=534, majf=0, minf=1 00:19:06.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.489 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.489 job2: (groupid=0, jobs=1): err= 0: pid=3195318: Mon Jul 15 03:22:12 2024 00:19:06.489 read: IOPS=20, 
BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:19:06.489 slat (nsec): min=12496, max=40399, avg=19712.29, stdev=9484.13 00:19:06.489 clat (usec): min=423, max=42025, avg=39584.97, stdev=8986.36 00:19:06.489 lat (usec): min=464, max=42044, avg=39604.69, stdev=8981.69 00:19:06.489 clat percentiles (usec): 00:19:06.489 | 1.00th=[ 424], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:06.489 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:19:06.489 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:06.489 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.489 | 99.99th=[42206] 00:19:06.489 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:06.489 slat (nsec): min=11206, max=63827, avg=26075.71, stdev=7550.45 00:19:06.489 clat (usec): min=230, max=413, avg=296.91, stdev=38.46 00:19:06.489 lat (usec): min=242, max=456, avg=322.99, stdev=42.63 00:19:06.489 clat percentiles (usec): 00:19:06.489 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:19:06.489 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:19:06.489 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 363], 95.00th=[ 383], 00:19:06.489 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 412], 99.95th=[ 412], 00:19:06.489 | 99.99th=[ 412] 00:19:06.489 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.489 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.489 lat (usec) : 250=5.25%, 500=90.99% 00:19:06.489 lat (msec) : 50=3.75% 00:19:06.489 cpu : usr=1.30%, sys=1.30%, ctx=533, majf=0, minf=1 00:19:06.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.489 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.489 job3: (groupid=0, jobs=1): err= 0: pid=3195319: Mon Jul 15 03:22:12 2024 00:19:06.489 read: IOPS=22, BW=89.1KiB/s (91.2kB/s)(92.0KiB/1033msec) 00:19:06.489 slat (nsec): min=12167, max=32787, avg=18792.52, stdev=8429.98 00:19:06.489 clat (usec): min=375, max=41987, avg=39320.85, stdev=8497.58 00:19:06.489 lat (usec): min=391, max=42012, avg=39339.64, stdev=8498.18 00:19:06.489 clat percentiles (usec): 00:19:06.489 | 1.00th=[ 375], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:06.489 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:06.489 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:06.489 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.489 | 99.99th=[42206] 00:19:06.489 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:19:06.489 slat (nsec): min=8980, max=59037, avg=18378.74, stdev=6511.47 00:19:06.489 clat (usec): min=176, max=438, avg=226.22, stdev=46.54 00:19:06.489 lat (usec): min=194, max=475, avg=244.60, stdev=47.61 00:19:06.489 clat percentiles (usec): 00:19:06.490 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:19:06.490 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 217], 00:19:06.490 | 70.00th=[ 225], 80.00th=[ 241], 90.00th=[ 297], 95.00th=[ 343], 00:19:06.490 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 437], 99.95th=[ 437], 00:19:06.490 | 99.99th=[ 437] 00:19:06.490 bw ( KiB/s): min= 
4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.490 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.490 lat (usec) : 250=80.37%, 500=15.51% 00:19:06.490 lat (msec) : 50=4.11% 00:19:06.490 cpu : usr=0.39%, sys=0.97%, ctx=535, majf=0, minf=1 00:19:06.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.490 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.490 00:19:06.490 Run status group 0 (all jobs): 00:19:06.490 READ: bw=871KiB/s (892kB/s), 83.0KiB/s-639KiB/s (85.0kB/s-654kB/s), io=900KiB (922kB), run=1001-1033msec 00:19:06.490 WRITE: bw=7930KiB/s (8121kB/s), 1983KiB/s-2046KiB/s (2030kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1033msec 00:19:06.490 00:19:06.490 Disk stats (read/write): 00:19:06.490 nvme0n1: ios=182/512, merge=0/0, ticks=1638/119, in_queue=1757, util=98.10% 00:19:06.490 nvme0n2: ios=42/512, merge=0/0, ticks=1649/117, in_queue=1766, util=98.46% 00:19:06.490 nvme0n3: ios=16/512, merge=0/0, ticks=623/135, in_queue=758, util=87.85% 00:19:06.490 nvme0n4: ios=17/512, merge=0/0, ticks=657/107, in_queue=764, util=89.21% 00:19:06.490 03:22:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:06.490 [global] 00:19:06.490 thread=1 00:19:06.490 invalidate=1 00:19:06.490 rw=randwrite 00:19:06.490 time_based=1 00:19:06.490 runtime=1 00:19:06.490 ioengine=libaio 00:19:06.490 direct=1 00:19:06.490 bs=4096 00:19:06.490 iodepth=1 00:19:06.490 norandommap=0 00:19:06.490 numjobs=1 00:19:06.490 00:19:06.490 verify_dump=1 00:19:06.490 verify_backlog=512 00:19:06.490 verify_state_save=0 00:19:06.490 do_verify=1 00:19:06.490 verify=crc32c-intel 00:19:06.490 [job0] 00:19:06.490 filename=/dev/nvme0n1 00:19:06.490 [job1] 00:19:06.490 filename=/dev/nvme0n2 00:19:06.490 [job2] 00:19:06.490 filename=/dev/nvme0n3 00:19:06.490 [job3] 00:19:06.490 filename=/dev/nvme0n4 00:19:06.490 Could not set queue depth (nvme0n1) 00:19:06.490 Could not set queue depth (nvme0n2) 00:19:06.490 Could not set queue depth (nvme0n3) 00:19:06.490 Could not set queue depth (nvme0n4) 00:19:06.748 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.748 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.748 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.748 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.748 fio-3.35 00:19:06.748 Starting 4 threads 00:19:08.118 00:19:08.118 job0: (groupid=0, jobs=1): err= 0: pid=3195543: Mon Jul 15 03:22:13 2024 00:19:08.118 read: IOPS=984, BW=3938KiB/s (4032kB/s)(4060KiB/1031msec) 00:19:08.118 slat (nsec): min=5266, max=67089, avg=13914.90, stdev=7442.00 00:19:08.118 clat (usec): min=247, max=41995, avg=724.70, stdev=3846.17 00:19:08.118 lat (usec): min=255, max=42011, avg=738.62, stdev=3846.43 00:19:08.118 clat percentiles (usec): 00:19:08.118 | 1.00th=[ 260], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 302], 00:19:08.118 | 30.00th=[ 310], 40.00th=[ 326], 50.00th=[ 334], 
60.00th=[ 338], 00:19:08.118 | 70.00th=[ 351], 80.00th=[ 400], 90.00th=[ 478], 95.00th=[ 537], 00:19:08.118 | 99.00th=[ 758], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:08.118 | 99.99th=[42206] 00:19:08.118 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:19:08.118 slat (usec): min=7, max=8899, avg=24.70, stdev=277.76 00:19:08.118 clat (usec): min=169, max=558, avg=240.99, stdev=52.77 00:19:08.118 lat (usec): min=176, max=9140, avg=265.68, stdev=283.37 00:19:08.118 clat percentiles (usec): 00:19:08.118 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:19:08.118 | 30.00th=[ 210], 40.00th=[ 223], 50.00th=[ 235], 60.00th=[ 245], 00:19:08.118 | 70.00th=[ 258], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 347], 00:19:08.118 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 515], 99.95th=[ 562], 00:19:08.118 | 99.99th=[ 562] 00:19:08.118 bw ( KiB/s): min= 4096, max= 4096, per=22.91%, avg=4096.00, stdev= 0.00, samples=2 00:19:08.118 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:19:08.118 lat (usec) : 250=33.64%, 500=62.58%, 750=3.24%, 1000=0.05% 00:19:08.118 lat (msec) : 10=0.05%, 50=0.44% 00:19:08.118 cpu : usr=0.78%, sys=3.79%, ctx=2041, majf=0, minf=2 00:19:08.118 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.118 issued rwts: total=1015,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.118 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.118 job1: (groupid=0, jobs=1): err= 0: pid=3195544: Mon Jul 15 03:22:13 2024 00:19:08.118 read: IOPS=677, BW=2709KiB/s (2774kB/s)(2712KiB/1001msec) 00:19:08.118 slat (nsec): min=5101, max=40250, avg=13197.12, stdev=5971.40 00:19:08.118 clat (usec): min=225, max=41500, avg=1095.78, stdev=5544.74 00:19:08.118 lat (usec): min=232, max=41515, avg=1108.97, stdev=5544.89 00:19:08.118 clat percentiles (usec): 00:19:08.118 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 251], 00:19:08.118 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 330], 00:19:08.118 | 70.00th=[ 351], 80.00th=[ 383], 90.00th=[ 433], 95.00th=[ 486], 00:19:08.118 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:08.118 | 99.99th=[41681] 00:19:08.118 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:08.118 slat (nsec): min=6851, max=66513, avg=14463.16, stdev=6973.29 00:19:08.118 clat (usec): min=166, max=413, avg=222.04, stdev=39.70 00:19:08.118 lat (usec): min=174, max=446, avg=236.50, stdev=40.77 00:19:08.118 clat percentiles (usec): 00:19:08.118 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:19:08.119 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 217], 00:19:08.119 | 70.00th=[ 227], 80.00th=[ 241], 90.00th=[ 281], 95.00th=[ 306], 00:19:08.119 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 404], 99.95th=[ 412], 00:19:08.119 | 99.99th=[ 412] 00:19:08.119 bw ( KiB/s): min= 8192, max= 8192, per=45.82%, avg=8192.00, stdev= 0.00, samples=1 00:19:08.119 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:08.119 lat (usec) : 250=57.34%, 500=41.42%, 750=0.47% 00:19:08.119 lat (msec) : 50=0.76% 00:19:08.119 cpu : usr=1.30%, sys=2.40%, ctx=1702, majf=0, minf=1 00:19:08.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.119 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.119 issued rwts: total=678,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.119 job2: (groupid=0, jobs=1): err= 0: pid=3195545: Mon Jul 15 03:22:13 2024 00:19:08.119 read: IOPS=182, BW=729KiB/s (747kB/s)(732KiB/1004msec) 00:19:08.119 slat (nsec): min=7074, max=34659, avg=11807.71, stdev=5754.27 00:19:08.119 clat (usec): min=251, max=42074, avg=4688.17, stdev=12608.63 00:19:08.119 lat (usec): min=258, max=42083, avg=4699.97, stdev=12610.03 00:19:08.119 clat percentiles (usec): 00:19:08.119 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:19:08.119 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 297], 00:19:08.119 | 70.00th=[ 318], 80.00th=[ 486], 90.00th=[40633], 95.00th=[41157], 00:19:08.119 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:08.119 | 99.99th=[42206] 00:19:08.119 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:19:08.119 slat (nsec): min=9306, max=49341, avg=20934.46, stdev=7705.62 00:19:08.119 clat (usec): min=182, max=392, avg=252.57, stdev=33.88 00:19:08.119 lat (usec): min=193, max=418, avg=273.50, stdev=36.34 00:19:08.119 clat percentiles (usec): 00:19:08.119 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 225], 00:19:08.119 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 258], 00:19:08.119 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 310], 00:19:08.119 | 99.00th=[ 375], 99.50th=[ 375], 99.90th=[ 392], 99.95th=[ 392], 00:19:08.119 | 99.99th=[ 392] 00:19:08.119 bw ( KiB/s): min= 4096, max= 4096, per=22.91%, avg=4096.00, stdev= 0.00, samples=1 00:19:08.119 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:08.119 lat (usec) : 250=36.40%, 500=59.42%, 750=1.29% 00:19:08.119 lat (msec) : 20=0.14%, 50=2.73% 00:19:08.119 cpu : usr=0.70%, sys=1.79%, ctx=696, majf=0, minf=1 00:19:08.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.119 issued rwts: total=183,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.119 job3: (groupid=0, jobs=1): err= 0: pid=3195546: Mon Jul 15 03:22:13 2024 00:19:08.119 read: IOPS=1553, BW=6214KiB/s (6363kB/s)(6220KiB/1001msec) 00:19:08.119 slat (nsec): min=5861, max=50296, avg=13245.78, stdev=5804.09 00:19:08.119 clat (usec): min=237, max=549, avg=315.33, stdev=39.81 00:19:08.119 lat (usec): min=245, max=582, avg=328.58, stdev=41.95 00:19:08.119 clat percentiles (usec): 00:19:08.119 | 1.00th=[ 247], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 285], 00:19:08.119 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:19:08.119 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 383], 00:19:08.119 | 99.00th=[ 449], 99.50th=[ 478], 99.90th=[ 515], 99.95th=[ 553], 00:19:08.119 | 99.99th=[ 553] 00:19:08.119 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:08.119 slat (nsec): min=7993, max=61861, avg=13072.39, stdev=7675.03 00:19:08.119 clat (usec): min=159, max=1046, avg=219.48, stdev=57.45 00:19:08.119 lat (usec): min=168, max=1070, avg=232.55, stdev=62.00 00:19:08.119 clat percentiles 
(usec): 00:19:08.119 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:19:08.119 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 215], 00:19:08.119 | 70.00th=[ 231], 80.00th=[ 249], 90.00th=[ 293], 95.00th=[ 334], 00:19:08.119 | 99.00th=[ 404], 99.50th=[ 461], 99.90th=[ 758], 99.95th=[ 840], 00:19:08.119 | 99.99th=[ 1045] 00:19:08.119 bw ( KiB/s): min= 8192, max= 8192, per=45.82%, avg=8192.00, stdev= 0.00, samples=1 00:19:08.119 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:08.119 lat (usec) : 250=46.04%, 500=53.68%, 750=0.19%, 1000=0.06% 00:19:08.119 lat (msec) : 2=0.03% 00:19:08.119 cpu : usr=2.90%, sys=4.40%, ctx=3604, majf=0, minf=1 00:19:08.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.119 issued rwts: total=1555,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.119 00:19:08.119 Run status group 0 (all jobs): 00:19:08.119 READ: bw=13.0MiB/s (13.6MB/s), 729KiB/s-6214KiB/s (747kB/s-6363kB/s), io=13.4MiB (14.1MB), run=1001-1031msec 00:19:08.119 WRITE: bw=17.5MiB/s (18.3MB/s), 2040KiB/s-8184KiB/s (2089kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1031msec 00:19:08.119 00:19:08.119 Disk stats (read/write): 00:19:08.119 nvme0n1: ios=1020/1024, merge=0/0, ticks=859/238, in_queue=1097, util=85.77% 00:19:08.119 nvme0n2: ios=723/1024, merge=0/0, ticks=649/219, in_queue=868, util=91.16% 00:19:08.119 nvme0n3: ios=201/512, merge=0/0, ticks=1546/127, in_queue=1673, util=93.53% 00:19:08.119 nvme0n4: ios=1408/1536, merge=0/0, ticks=1301/345, in_queue=1646, util=94.12% 00:19:08.119 03:22:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:08.119 [global] 00:19:08.119 thread=1 00:19:08.119 invalidate=1 00:19:08.119 rw=write 00:19:08.119 time_based=1 00:19:08.119 runtime=1 00:19:08.119 ioengine=libaio 00:19:08.119 direct=1 00:19:08.119 bs=4096 00:19:08.119 iodepth=128 00:19:08.119 norandommap=0 00:19:08.119 numjobs=1 00:19:08.119 00:19:08.119 verify_dump=1 00:19:08.119 verify_backlog=512 00:19:08.119 verify_state_save=0 00:19:08.119 do_verify=1 00:19:08.119 verify=crc32c-intel 00:19:08.119 [job0] 00:19:08.119 filename=/dev/nvme0n1 00:19:08.119 [job1] 00:19:08.119 filename=/dev/nvme0n2 00:19:08.119 [job2] 00:19:08.119 filename=/dev/nvme0n3 00:19:08.119 [job3] 00:19:08.119 filename=/dev/nvme0n4 00:19:08.119 Could not set queue depth (nvme0n1) 00:19:08.119 Could not set queue depth (nvme0n2) 00:19:08.119 Could not set queue depth (nvme0n3) 00:19:08.119 Could not set queue depth (nvme0n4) 00:19:08.119 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.119 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.119 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.119 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.119 fio-3.35 00:19:08.119 Starting 4 threads 00:19:09.494 00:19:09.494 job0: (groupid=0, jobs=1): err= 0: pid=3195780: Mon Jul 15 03:22:15 2024 00:19:09.494 read: IOPS=3548, BW=13.9MiB/s 
(14.5MB/s)(14.0MiB/1010msec) 00:19:09.494 slat (usec): min=2, max=16924, avg=130.78, stdev=819.49 00:19:09.494 clat (usec): min=6542, max=50805, avg=17044.77, stdev=6378.92 00:19:09.494 lat (usec): min=8522, max=50810, avg=17175.55, stdev=6444.21 00:19:09.494 clat percentiles (usec): 00:19:09.494 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[11338], 20.00th=[12125], 00:19:09.494 | 30.00th=[13173], 40.00th=[13960], 50.00th=[14746], 60.00th=[17957], 00:19:09.494 | 70.00th=[20055], 80.00th=[20317], 90.00th=[22676], 95.00th=[30802], 00:19:09.494 | 99.00th=[46400], 99.50th=[46400], 99.90th=[50594], 99.95th=[50594], 00:19:09.494 | 99.99th=[50594] 00:19:09.494 write: IOPS=3626, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1010msec); 0 zone resets 00:19:09.494 slat (usec): min=3, max=11828, avg=137.16, stdev=676.96 00:19:09.494 clat (usec): min=5861, max=47971, avg=18178.71, stdev=7406.85 00:19:09.494 lat (usec): min=7908, max=47978, avg=18315.87, stdev=7453.24 00:19:09.494 clat percentiles (usec): 00:19:09.494 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[10683], 20.00th=[12387], 00:19:09.494 | 30.00th=[13304], 40.00th=[15008], 50.00th=[15926], 60.00th=[17433], 00:19:09.494 | 70.00th=[21103], 80.00th=[23987], 90.00th=[27919], 95.00th=[32375], 00:19:09.494 | 99.00th=[46400], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973], 00:19:09.494 | 99.99th=[47973] 00:19:09.494 bw ( KiB/s): min=11928, max=16744, per=22.69%, avg=14336.00, stdev=3405.43, samples=2 00:19:09.494 iops : min= 2982, max= 4186, avg=3584.00, stdev=851.36, samples=2 00:19:09.494 lat (msec) : 10=3.12%, 20=65.81%, 50=31.02%, 100=0.06% 00:19:09.494 cpu : usr=3.47%, sys=6.84%, ctx=403, majf=0, minf=13 00:19:09.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:09.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.494 issued rwts: total=3584,3663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.494 job1: (groupid=0, jobs=1): err= 0: pid=3195781: Mon Jul 15 03:22:15 2024 00:19:09.494 read: IOPS=4558, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1007msec) 00:19:09.494 slat (usec): min=2, max=21385, avg=114.58, stdev=821.50 00:19:09.494 clat (usec): min=3912, max=60687, avg=14489.40, stdev=8874.79 00:19:09.494 lat (usec): min=3919, max=60697, avg=14603.98, stdev=8930.84 00:19:09.494 clat percentiles (usec): 00:19:09.494 | 1.00th=[ 4948], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10683], 00:19:09.494 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:19:09.494 | 70.00th=[13566], 80.00th=[15664], 90.00th=[20055], 95.00th=[34866], 00:19:09.494 | 99.00th=[55313], 99.50th=[60556], 99.90th=[60556], 99.95th=[60556], 00:19:09.494 | 99.99th=[60556] 00:19:09.494 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:19:09.494 slat (usec): min=3, max=19809, avg=95.77, stdev=615.44 00:19:09.494 clat (usec): min=2347, max=39113, avg=12753.91, stdev=4035.99 00:19:09.494 lat (usec): min=2353, max=39120, avg=12849.68, stdev=4069.06 00:19:09.494 clat percentiles (usec): 00:19:09.494 | 1.00th=[ 3916], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[10552], 00:19:09.494 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11863], 60.00th=[13173], 00:19:09.495 | 70.00th=[14091], 80.00th=[15270], 90.00th=[17433], 95.00th=[19792], 00:19:09.495 | 99.00th=[26870], 99.50th=[28967], 99.90th=[30278], 99.95th=[35390], 00:19:09.495 | 99.99th=[39060] 
00:19:09.495 bw ( KiB/s): min=16384, max=20480, per=29.18%, avg=18432.00, stdev=2896.31, samples=2 00:19:09.495 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:19:09.495 lat (msec) : 4=0.64%, 10=14.62%, 20=78.16%, 50=5.40%, 100=1.17% 00:19:09.495 cpu : usr=5.37%, sys=5.07%, ctx=422, majf=0, minf=15 00:19:09.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:09.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.495 issued rwts: total=4590,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.495 job2: (groupid=0, jobs=1): err= 0: pid=3195784: Mon Jul 15 03:22:15 2024 00:19:09.495 read: IOPS=3105, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1003msec) 00:19:09.495 slat (usec): min=2, max=16063, avg=173.72, stdev=1084.12 00:19:09.495 clat (usec): min=828, max=61345, avg=22369.28, stdev=12909.35 00:19:09.495 lat (usec): min=3483, max=61355, avg=22542.99, stdev=12969.32 00:19:09.495 clat percentiles (usec): 00:19:09.495 | 1.00th=[ 3621], 5.00th=[ 9110], 10.00th=[10814], 20.00th=[11994], 00:19:09.495 | 30.00th=[13829], 40.00th=[15139], 50.00th=[18744], 60.00th=[22938], 00:19:09.495 | 70.00th=[26084], 80.00th=[27657], 90.00th=[43779], 95.00th=[53216], 00:19:09.495 | 99.00th=[58983], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:19:09.495 | 99.99th=[61604] 00:19:09.495 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:19:09.495 slat (usec): min=3, max=8802, avg=122.71, stdev=600.16 00:19:09.495 clat (usec): min=7160, max=38091, avg=15926.70, stdev=5807.13 00:19:09.495 lat (usec): min=7163, max=38097, avg=16049.41, stdev=5848.99 00:19:09.495 clat percentiles (usec): 00:19:09.495 | 1.00th=[ 7373], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11600], 00:19:09.495 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13435], 60.00th=[14615], 00:19:09.495 | 70.00th=[17433], 80.00th=[22152], 90.00th=[24511], 95.00th=[27657], 00:19:09.495 | 99.00th=[31589], 99.50th=[33817], 99.90th=[38011], 99.95th=[38011], 00:19:09.495 | 99.99th=[38011] 00:19:09.495 bw ( KiB/s): min=12344, max=15648, per=22.16%, avg=13996.00, stdev=2336.28, samples=2 00:19:09.495 iops : min= 3086, max= 3912, avg=3499.00, stdev=584.07, samples=2 00:19:09.495 lat (usec) : 1000=0.01% 00:19:09.495 lat (msec) : 4=0.54%, 10=5.88%, 20=59.38%, 50=31.26%, 100=2.93% 00:19:09.495 cpu : usr=2.30%, sys=4.09%, ctx=341, majf=0, minf=13 00:19:09.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:09.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.495 issued rwts: total=3115,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.495 job3: (groupid=0, jobs=1): err= 0: pid=3195785: Mon Jul 15 03:22:15 2024 00:19:09.495 read: IOPS=3839, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1009msec) 00:19:09.495 slat (usec): min=2, max=21247, avg=121.27, stdev=766.82 00:19:09.495 clat (usec): min=3153, max=44445, avg=15359.36, stdev=5439.99 00:19:09.495 lat (usec): min=8990, max=45661, avg=15480.63, stdev=5471.28 00:19:09.495 clat percentiles (usec): 00:19:09.495 | 1.00th=[ 9241], 5.00th=[11076], 10.00th=[11731], 20.00th=[12256], 00:19:09.495 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13698], 60.00th=[14222], 
00:19:09.495 | 70.00th=[15533], 80.00th=[16909], 90.00th=[19530], 95.00th=[25297], 00:19:09.495 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:19:09.495 | 99.99th=[44303] 00:19:09.495 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:19:09.495 slat (usec): min=3, max=13776, avg=124.64, stdev=685.83 00:19:09.495 clat (usec): min=7413, max=40211, avg=16676.78, stdev=6140.62 00:19:09.495 lat (usec): min=7424, max=40224, avg=16801.42, stdev=6185.76 00:19:09.495 clat percentiles (usec): 00:19:09.495 | 1.00th=[ 8455], 5.00th=[11600], 10.00th=[11994], 20.00th=[12256], 00:19:09.495 | 30.00th=[12649], 40.00th=[13566], 50.00th=[14484], 60.00th=[15664], 00:19:09.495 | 70.00th=[16909], 80.00th=[20055], 90.00th=[26346], 95.00th=[31327], 00:19:09.495 | 99.00th=[36439], 99.50th=[38011], 99.90th=[40109], 99.95th=[40109], 00:19:09.495 | 99.99th=[40109] 00:19:09.495 bw ( KiB/s): min=16384, max=16384, per=25.94%, avg=16384.00, stdev= 0.00, samples=2 00:19:09.495 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:09.495 lat (msec) : 4=0.01%, 10=2.38%, 20=82.61%, 50=14.99% 00:19:09.495 cpu : usr=2.78%, sys=5.06%, ctx=349, majf=0, minf=9 00:19:09.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:09.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.495 issued rwts: total=3874,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.495 00:19:09.495 Run status group 0 (all jobs): 00:19:09.495 READ: bw=58.6MiB/s (61.5MB/s), 12.1MiB/s-17.8MiB/s (12.7MB/s-18.7MB/s), io=59.2MiB (62.1MB), run=1003-1010msec 00:19:09.495 WRITE: bw=61.7MiB/s (64.7MB/s), 14.0MiB/s-17.9MiB/s (14.6MB/s-18.7MB/s), io=62.3MiB (65.3MB), run=1003-1010msec 00:19:09.495 00:19:09.495 Disk stats (read/write): 00:19:09.495 nvme0n1: ios=3121/3375, merge=0/0, ticks=19884/24270, in_queue=44154, util=97.70% 00:19:09.495 nvme0n2: ios=3635/4096, merge=0/0, ticks=26474/31213, in_queue=57687, util=97.97% 00:19:09.495 nvme0n3: ios=2560/2951, merge=0/0, ticks=16418/13389, in_queue=29807, util=88.91% 00:19:09.495 nvme0n4: ios=3332/3584, merge=0/0, ticks=19846/22303, in_queue=42149, util=97.89% 00:19:09.495 03:22:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:09.495 [global] 00:19:09.495 thread=1 00:19:09.495 invalidate=1 00:19:09.495 rw=randwrite 00:19:09.495 time_based=1 00:19:09.495 runtime=1 00:19:09.495 ioengine=libaio 00:19:09.495 direct=1 00:19:09.495 bs=4096 00:19:09.495 iodepth=128 00:19:09.495 norandommap=0 00:19:09.495 numjobs=1 00:19:09.495 00:19:09.495 verify_dump=1 00:19:09.495 verify_backlog=512 00:19:09.495 verify_state_save=0 00:19:09.495 do_verify=1 00:19:09.495 verify=crc32c-intel 00:19:09.495 [job0] 00:19:09.495 filename=/dev/nvme0n1 00:19:09.495 [job1] 00:19:09.495 filename=/dev/nvme0n2 00:19:09.495 [job2] 00:19:09.495 filename=/dev/nvme0n3 00:19:09.495 [job3] 00:19:09.495 filename=/dev/nvme0n4 00:19:09.495 Could not set queue depth (nvme0n1) 00:19:09.495 Could not set queue depth (nvme0n2) 00:19:09.495 Could not set queue depth (nvme0n3) 00:19:09.495 Could not set queue depth (nvme0n4) 00:19:09.495 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.495 job1: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.495 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.495 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.495 fio-3.35 00:19:09.495 Starting 4 threads 00:19:10.920 00:19:10.920 job0: (groupid=0, jobs=1): err= 0: pid=3196128: Mon Jul 15 03:22:16 2024 00:19:10.920 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:19:10.920 slat (usec): min=3, max=10469, avg=99.67, stdev=676.09 00:19:10.920 clat (usec): min=4026, max=36395, avg=12579.65, stdev=3582.37 00:19:10.920 lat (usec): min=4042, max=36410, avg=12679.32, stdev=3626.12 00:19:10.920 clat percentiles (usec): 00:19:10.920 | 1.00th=[ 8356], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10028], 00:19:10.920 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11600], 60.00th=[12518], 00:19:10.920 | 70.00th=[13173], 80.00th=[14484], 90.00th=[16909], 95.00th=[18744], 00:19:10.920 | 99.00th=[27657], 99.50th=[32637], 99.90th=[36439], 99.95th=[36439], 00:19:10.920 | 99.99th=[36439] 00:19:10.920 write: IOPS=4959, BW=19.4MiB/s (20.3MB/s)(19.6MiB/1011msec); 0 zone resets 00:19:10.920 slat (usec): min=4, max=8896, avg=96.56, stdev=482.40 00:19:10.920 clat (usec): min=1146, max=36352, avg=13975.24, stdev=6460.78 00:19:10.920 lat (usec): min=1152, max=36360, avg=14071.80, stdev=6498.99 00:19:10.920 clat percentiles (usec): 00:19:10.920 | 1.00th=[ 2999], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 9503], 00:19:10.920 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[12911], 00:19:10.920 | 70.00th=[15926], 80.00th=[21103], 90.00th=[24773], 95.00th=[26346], 00:19:10.920 | 99.00th=[30016], 99.50th=[32113], 99.90th=[35390], 99.95th=[35390], 00:19:10.920 | 99.99th=[36439] 00:19:10.920 bw ( KiB/s): min=18616, max=20480, per=29.57%, avg=19548.00, stdev=1318.05, samples=2 00:19:10.920 iops : min= 4654, max= 5120, avg=4887.00, stdev=329.51, samples=2 00:19:10.920 lat (msec) : 2=0.33%, 4=0.84%, 10=20.32%, 20=65.15%, 50=13.35% 00:19:10.920 cpu : usr=6.24%, sys=10.30%, ctx=521, majf=0, minf=17 00:19:10.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:10.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.920 issued rwts: total=4608,5014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.920 job1: (groupid=0, jobs=1): err= 0: pid=3196129: Mon Jul 15 03:22:16 2024 00:19:10.920 read: IOPS=2410, BW=9641KiB/s (9872kB/s)(9708KiB/1007msec) 00:19:10.920 slat (usec): min=2, max=29277, avg=194.96, stdev=1313.81 00:19:10.921 clat (usec): min=3387, max=98324, avg=25611.31, stdev=17226.48 00:19:10.921 lat (msec): min=6, max=102, avg=25.81, stdev=17.34 00:19:10.921 clat percentiles (usec): 00:19:10.921 | 1.00th=[ 7504], 5.00th=[ 9503], 10.00th=[13960], 20.00th=[15533], 00:19:10.921 | 30.00th=[16581], 40.00th=[17171], 50.00th=[18482], 60.00th=[18744], 00:19:10.921 | 70.00th=[25297], 80.00th=[36963], 90.00th=[51643], 95.00th=[55837], 00:19:10.921 | 99.00th=[87557], 99.50th=[93848], 99.90th=[98042], 99.95th=[98042], 00:19:10.921 | 99.99th=[98042] 00:19:10.921 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:19:10.921 slat (usec): min=3, max=20252, avg=200.02, stdev=1256.35 
00:19:10.921 clat (msec): min=2, max=100, avg=25.45, stdev=13.89 00:19:10.921 lat (msec): min=2, max=100, avg=25.65, stdev=14.01 00:19:10.921 clat percentiles (msec): 00:19:10.921 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 16], 00:19:10.921 | 30.00th=[ 19], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 24], 00:19:10.921 | 70.00th=[ 30], 80.00th=[ 35], 90.00th=[ 44], 95.00th=[ 48], 00:19:10.921 | 99.00th=[ 81], 99.50th=[ 87], 99.90th=[ 101], 99.95th=[ 101], 00:19:10.921 | 99.99th=[ 101] 00:19:10.921 bw ( KiB/s): min= 8192, max=12288, per=15.49%, avg=10240.00, stdev=2896.31, samples=2 00:19:10.921 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:19:10.921 lat (msec) : 4=0.06%, 10=7.30%, 20=42.71%, 50=42.75%, 100=7.12% 00:19:10.921 lat (msec) : 250=0.06% 00:19:10.921 cpu : usr=2.39%, sys=2.88%, ctx=222, majf=0, minf=15 00:19:10.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:10.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.921 issued rwts: total=2427,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.921 job2: (groupid=0, jobs=1): err= 0: pid=3196130: Mon Jul 15 03:22:16 2024 00:19:10.921 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:19:10.921 slat (usec): min=2, max=13080, avg=98.95, stdev=700.52 00:19:10.921 clat (usec): min=1553, max=25737, avg=13074.61, stdev=3637.50 00:19:10.921 lat (usec): min=1556, max=25872, avg=13173.55, stdev=3686.52 00:19:10.921 clat percentiles (usec): 00:19:10.921 | 1.00th=[ 2900], 5.00th=[ 7570], 10.00th=[10421], 20.00th=[11207], 00:19:10.921 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12649], 60.00th=[12911], 00:19:10.921 | 70.00th=[13566], 80.00th=[15008], 90.00th=[17957], 95.00th=[20055], 00:19:10.921 | 99.00th=[23725], 99.50th=[23987], 99.90th=[25822], 99.95th=[25822], 00:19:10.921 | 99.99th=[25822] 00:19:10.921 write: IOPS=4927, BW=19.2MiB/s (20.2MB/s)(19.4MiB/1007msec); 0 zone resets 00:19:10.921 slat (usec): min=3, max=12256, avg=96.23, stdev=671.29 00:19:10.921 clat (usec): min=535, max=45440, avg=13596.07, stdev=6609.97 00:19:10.921 lat (usec): min=551, max=45447, avg=13692.30, stdev=6660.04 00:19:10.921 clat percentiles (usec): 00:19:10.921 | 1.00th=[ 2008], 5.00th=[ 6194], 10.00th=[ 7701], 20.00th=[ 9634], 00:19:10.921 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12256], 60.00th=[12780], 00:19:10.921 | 70.00th=[13173], 80.00th=[15926], 90.00th=[20055], 95.00th=[30016], 00:19:10.921 | 99.00th=[40633], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:19:10.921 | 99.99th=[45351] 00:19:10.921 bw ( KiB/s): min=18200, max=20480, per=29.25%, avg=19340.00, stdev=1612.20, samples=2 00:19:10.921 iops : min= 4550, max= 5120, avg=4835.00, stdev=403.05, samples=2 00:19:10.921 lat (usec) : 750=0.01% 00:19:10.921 lat (msec) : 2=0.57%, 4=1.84%, 10=12.88%, 20=76.99%, 50=7.70% 00:19:10.921 cpu : usr=4.67%, sys=10.04%, ctx=397, majf=0, minf=11 00:19:10.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:10.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.921 issued rwts: total=4608,4962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.921 job3: (groupid=0, jobs=1): err= 0: 
pid=3196131: Mon Jul 15 03:22:16 2024 00:19:10.921 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:19:10.921 slat (usec): min=3, max=9776, avg=116.81, stdev=624.71 00:19:10.921 clat (usec): min=10486, max=26876, avg=15307.70, stdev=2643.34 00:19:10.921 lat (usec): min=10493, max=26894, avg=15424.51, stdev=2693.12 00:19:10.921 clat percentiles (usec): 00:19:10.921 | 1.00th=[11338], 5.00th=[12649], 10.00th=[13173], 20.00th=[13698], 00:19:10.921 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14353], 60.00th=[14615], 00:19:10.921 | 70.00th=[15664], 80.00th=[16909], 90.00th=[18744], 95.00th=[20055], 00:19:10.921 | 99.00th=[25822], 99.50th=[26346], 99.90th=[26870], 99.95th=[26870], 00:19:10.921 | 99.99th=[26870] 00:19:10.921 write: IOPS=4145, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1007msec); 0 zone resets 00:19:10.921 slat (usec): min=3, max=11316, avg=113.80, stdev=656.94 00:19:10.921 clat (usec): min=6243, max=34661, avg=15414.39, stdev=3135.18 00:19:10.921 lat (usec): min=6622, max=34683, avg=15528.18, stdev=3194.35 00:19:10.921 clat percentiles (usec): 00:19:10.921 | 1.00th=[ 7111], 5.00th=[10814], 10.00th=[12911], 20.00th=[14091], 00:19:10.921 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14746], 60.00th=[15139], 00:19:10.921 | 70.00th=[15533], 80.00th=[16057], 90.00th=[20055], 95.00th=[22414], 00:19:10.921 | 99.00th=[23987], 99.50th=[23987], 99.90th=[32113], 99.95th=[32900], 00:19:10.921 | 99.99th=[34866] 00:19:10.921 bw ( KiB/s): min=15816, max=16952, per=24.78%, avg=16384.00, stdev=803.27, samples=2 00:19:10.921 iops : min= 3954, max= 4238, avg=4096.00, stdev=200.82, samples=2 00:19:10.921 lat (msec) : 10=2.15%, 20=89.80%, 50=8.05% 00:19:10.921 cpu : usr=5.47%, sys=9.34%, ctx=433, majf=0, minf=7 00:19:10.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:10.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.921 issued rwts: total=4096,4175,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.921 00:19:10.921 Run status group 0 (all jobs): 00:19:10.921 READ: bw=60.8MiB/s (63.8MB/s), 9641KiB/s-17.9MiB/s (9872kB/s-18.7MB/s), io=61.5MiB (64.5MB), run=1007-1011msec 00:19:10.921 WRITE: bw=64.6MiB/s (67.7MB/s), 9.93MiB/s-19.4MiB/s (10.4MB/s-20.3MB/s), io=65.3MiB (68.4MB), run=1007-1011msec 00:19:10.921 00:19:10.921 Disk stats (read/write): 00:19:10.921 nvme0n1: ios=4092/4096, merge=0/0, ticks=48857/54002, in_queue=102859, util=99.00% 00:19:10.921 nvme0n2: ios=2068/2125, merge=0/0, ticks=19451/19888, in_queue=39339, util=86.59% 00:19:10.921 nvme0n3: ios=3842/4096, merge=0/0, ticks=43775/54304, in_queue=98079, util=88.92% 00:19:10.921 nvme0n4: ios=3582/3584, merge=0/0, ticks=27761/23687, in_queue=51448, util=97.89% 00:19:10.921 03:22:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:10.921 03:22:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3196268 00:19:10.921 03:22:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:10.921 03:22:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:10.921 [global] 00:19:10.921 thread=1 00:19:10.921 invalidate=1 00:19:10.921 rw=read 00:19:10.921 time_based=1 00:19:10.921 runtime=10 00:19:10.921 ioengine=libaio 00:19:10.921 direct=1 00:19:10.921 bs=4096 00:19:10.921 iodepth=1 00:19:10.921 
norandommap=1 00:19:10.921 numjobs=1 00:19:10.921 00:19:10.921 [job0] 00:19:10.921 filename=/dev/nvme0n1 00:19:10.921 [job1] 00:19:10.921 filename=/dev/nvme0n2 00:19:10.921 [job2] 00:19:10.921 filename=/dev/nvme0n3 00:19:10.921 [job3] 00:19:10.921 filename=/dev/nvme0n4 00:19:10.921 Could not set queue depth (nvme0n1) 00:19:10.921 Could not set queue depth (nvme0n2) 00:19:10.921 Could not set queue depth (nvme0n3) 00:19:10.921 Could not set queue depth (nvme0n4) 00:19:11.179 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.179 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.179 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.179 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.179 fio-3.35 00:19:11.179 Starting 4 threads 00:19:14.456 03:22:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:14.456 03:22:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:14.456 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=372736, buflen=4096 00:19:14.456 fio: pid=3196361, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:14.456 03:22:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.456 03:22:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:14.456 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=34537472, buflen=4096 00:19:14.456 fio: pid=3196360, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:14.714 03:22:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.714 03:22:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:14.714 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=24145920, buflen=4096 00:19:14.714 fio: pid=3196358, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:14.971 03:22:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.971 03:22:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:14.972 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=18837504, buflen=4096 00:19:14.972 fio: pid=3196359, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:14.972 00:19:14.972 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3196358: Mon Jul 15 03:22:21 2024 00:19:14.972 read: IOPS=1676, BW=6706KiB/s (6867kB/s)(23.0MiB/3516msec) 00:19:14.972 slat (usec): min=5, max=28824, avg=17.32, stdev=410.53 00:19:14.972 clat (usec): min=246, max=42991, avg=572.66, stdev=3064.56 00:19:14.972 lat (usec): min=253, max=43014, avg=587.81, stdev=3088.31 00:19:14.972 clat percentiles (usec): 00:19:14.972 | 1.00th=[ 269], 5.00th=[ 
281], 10.00th=[ 289], 20.00th=[ 310], 00:19:14.972 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 347], 00:19:14.972 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 457], 00:19:14.972 | 99.00th=[ 515], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:14.972 | 99.99th=[43254] 00:19:14.972 bw ( KiB/s): min= 96, max=11488, per=28.91%, avg=5822.67, stdev=5137.15, samples=6 00:19:14.972 iops : min= 24, max= 2872, avg=1455.67, stdev=1284.29, samples=6 00:19:14.972 lat (usec) : 250=0.02%, 500=98.47%, 750=0.92%, 1000=0.02% 00:19:14.972 lat (msec) : 50=0.56% 00:19:14.972 cpu : usr=1.05%, sys=2.73%, ctx=5900, majf=0, minf=1 00:19:14.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.972 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.972 issued rwts: total=5896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.972 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3196359: Mon Jul 15 03:22:21 2024 00:19:14.972 read: IOPS=1217, BW=4871KiB/s (4987kB/s)(18.0MiB/3777msec) 00:19:14.972 slat (usec): min=4, max=15775, avg=26.15, stdev=417.33 00:19:14.972 clat (usec): min=220, max=46199, avg=791.93, stdev=4312.38 00:19:14.972 lat (usec): min=225, max=53077, avg=816.47, stdev=4378.66 00:19:14.972 clat percentiles (usec): 00:19:14.972 | 1.00th=[ 247], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:19:14.972 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 318], 00:19:14.972 | 70.00th=[ 334], 80.00th=[ 363], 90.00th=[ 383], 95.00th=[ 449], 00:19:14.972 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:19:14.972 | 99.99th=[46400] 00:19:14.972 bw ( KiB/s): min= 104, max=11680, per=24.81%, avg=4997.14, stdev=5364.59, samples=7 00:19:14.972 iops : min= 26, max= 2920, avg=1249.29, stdev=1341.15, samples=7 00:19:14.972 lat (usec) : 250=1.52%, 500=95.00%, 750=2.00%, 1000=0.13% 00:19:14.972 lat (msec) : 2=0.11%, 4=0.04%, 10=0.02%, 20=0.02%, 50=1.13% 00:19:14.972 cpu : usr=0.85%, sys=2.12%, ctx=4604, majf=0, minf=1 00:19:14.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.972 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.972 issued rwts: total=4600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.972 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3196360: Mon Jul 15 03:22:21 2024 00:19:14.972 read: IOPS=2597, BW=10.1MiB/s (10.6MB/s)(32.9MiB/3247msec) 00:19:14.972 slat (usec): min=4, max=15565, avg=16.94, stdev=235.78 00:19:14.972 clat (usec): min=260, max=41059, avg=362.28, stdev=798.66 00:19:14.972 lat (usec): min=268, max=41069, avg=379.22, stdev=833.03 00:19:14.972 clat percentiles (usec): 00:19:14.972 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 310], 00:19:14.972 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 351], 00:19:14.972 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 388], 95.00th=[ 424], 00:19:14.972 | 99.00th=[ 545], 99.50th=[ 611], 99.90th=[ 996], 99.95th=[ 3523], 00:19:14.972 | 99.99th=[41157] 00:19:14.972 bw ( KiB/s): min= 9592, max=11472, per=52.37%, avg=10548.00, 
stdev=864.04, samples=6 00:19:14.972 iops : min= 2398, max= 2868, avg=2637.00, stdev=216.01, samples=6 00:19:14.972 lat (usec) : 500=98.42%, 750=1.41%, 1000=0.06% 00:19:14.972 lat (msec) : 2=0.02%, 4=0.02%, 50=0.05% 00:19:14.972 cpu : usr=1.97%, sys=4.68%, ctx=8435, majf=0, minf=1 00:19:14.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.972 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.972 issued rwts: total=8433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.972 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3196361: Mon Jul 15 03:22:21 2024 00:19:14.972 read: IOPS=30, BW=123KiB/s (126kB/s)(364KiB/2969msec) 00:19:14.972 slat (nsec): min=8044, max=35480, avg=21038.65, stdev=8690.63 00:19:14.972 clat (usec): min=321, max=42251, avg=32349.13, stdev=16661.06 00:19:14.972 lat (usec): min=330, max=42285, avg=32370.25, stdev=16661.88 00:19:14.972 clat percentiles (usec): 00:19:14.972 | 1.00th=[ 322], 5.00th=[ 375], 10.00th=[ 392], 20.00th=[ 988], 00:19:14.972 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:14.972 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:14.972 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:14.972 | 99.99th=[42206] 00:19:14.972 bw ( KiB/s): min= 96, max= 168, per=0.63%, avg=126.40, stdev=26.77, samples=5 00:19:14.972 iops : min= 24, max= 42, avg=31.60, stdev= 6.69, samples=5 00:19:14.972 lat (usec) : 500=17.39%, 750=2.17%, 1000=1.09% 00:19:14.972 lat (msec) : 20=1.09%, 50=77.17% 00:19:14.972 cpu : usr=0.00%, sys=0.10%, ctx=94, majf=0, minf=1 00:19:14.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.972 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.972 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.972 00:19:14.972 Run status group 0 (all jobs): 00:19:14.972 READ: bw=19.7MiB/s (20.6MB/s), 123KiB/s-10.1MiB/s (126kB/s-10.6MB/s), io=74.3MiB (77.9MB), run=2969-3777msec 00:19:14.972 00:19:14.972 Disk stats (read/write): 00:19:14.972 nvme0n1: ios=5549/0, merge=0/0, ticks=4375/0, in_queue=4375, util=98.43% 00:19:14.972 nvme0n2: ios=4595/0, merge=0/0, ticks=3446/0, in_queue=3446, util=95.23% 00:19:14.972 nvme0n3: ios=8111/0, merge=0/0, ticks=2837/0, in_queue=2837, util=95.85% 00:19:14.972 nvme0n4: ios=142/0, merge=0/0, ticks=3875/0, in_queue=3875, util=98.98% 00:19:15.230 03:22:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.230 03:22:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:15.488 03:22:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.488 03:22:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:15.745 03:22:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.745 03:22:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:16.003 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.003 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:16.260 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:16.260 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3196268 00:19:16.260 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:16.260 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:16.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:16.518 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:16.518 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:16.518 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:16.518 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.518 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:16.518 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.518 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:16.518 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:16.518 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:16.518 nvmf hotplug test: fio failed as expected 00:19:16.518 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.776 rmmod nvme_tcp 00:19:16.776 rmmod nvme_fabrics 00:19:16.776 rmmod nvme_keyring 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # 
return 0 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3194247 ']' 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3194247 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3194247 ']' 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3194247 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3194247 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3194247' 00:19:16.776 killing process with pid 3194247 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3194247 00:19:16.776 03:22:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3194247 00:19:17.035 03:22:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:17.035 03:22:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:17.035 03:22:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:17.035 03:22:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.035 03:22:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:17.035 03:22:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.035 03:22:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.035 03:22:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.567 03:22:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.567 00:19:19.567 real 0m23.385s 00:19:19.567 user 1m20.567s 00:19:19.567 sys 0m7.098s 00:19:19.567 03:22:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:19.567 03:22:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.567 ************************************ 00:19:19.567 END TEST nvmf_fio_target 00:19:19.567 ************************************ 00:19:19.567 03:22:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:19.567 03:22:25 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:19.567 03:22:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:19.567 03:22:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.567 03:22:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:19.567 ************************************ 00:19:19.567 START TEST nvmf_bdevio 00:19:19.567 ************************************ 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:19.567 * Looking for test storage... 
00:19:19.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:19.567 03:22:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:21.470 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:21.470 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:21.470 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:21.470 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:21.470 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:21.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:19:21.471 00:19:21.471 --- 10.0.0.2 ping statistics --- 00:19:21.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.471 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:21.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:19:21.471 00:19:21.471 --- 10.0.0.1 ping statistics --- 00:19:21.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.471 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3198976 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3198976 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3198976 ']' 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.471 [2024-07-15 03:22:27.315086] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:21.471 [2024-07-15 03:22:27.315167] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.471 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.471 [2024-07-15 03:22:27.381290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.471 [2024-07-15 03:22:27.471015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.471 [2024-07-15 03:22:27.471076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:21.471 [2024-07-15 03:22:27.471105] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.471 [2024-07-15 03:22:27.471117] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.471 [2024-07-15 03:22:27.471128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.471 [2024-07-15 03:22:27.471216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:21.471 [2024-07-15 03:22:27.471270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:21.471 [2024-07-15 03:22:27.471319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:21.471 [2024-07-15 03:22:27.471321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:21.471 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.730 [2024-07-15 03:22:27.632667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.730 Malloc0 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:21.730 [2024-07-15 03:22:27.686485] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:21.730 { 00:19:21.730 "params": { 00:19:21.730 "name": "Nvme$subsystem", 00:19:21.730 "trtype": "$TEST_TRANSPORT", 00:19:21.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.730 "adrfam": "ipv4", 00:19:21.730 "trsvcid": "$NVMF_PORT", 00:19:21.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.730 "hdgst": ${hdgst:-false}, 00:19:21.730 "ddgst": ${ddgst:-false} 00:19:21.730 }, 00:19:21.730 "method": "bdev_nvme_attach_controller" 00:19:21.730 } 00:19:21.730 EOF 00:19:21.730 )") 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:21.730 03:22:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:21.730 "params": { 00:19:21.730 "name": "Nvme1", 00:19:21.730 "trtype": "tcp", 00:19:21.730 "traddr": "10.0.0.2", 00:19:21.730 "adrfam": "ipv4", 00:19:21.730 "trsvcid": "4420", 00:19:21.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.730 "hdgst": false, 00:19:21.730 "ddgst": false 00:19:21.730 }, 00:19:21.730 "method": "bdev_nvme_attach_controller" 00:19:21.730 }' 00:19:21.730 [2024-07-15 03:22:27.734453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:19:21.730 [2024-07-15 03:22:27.734521] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199005 ] 00:19:21.730 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.730 [2024-07-15 03:22:27.795093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.988 [2024-07-15 03:22:27.887558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.988 [2024-07-15 03:22:27.887607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.988 [2024-07-15 03:22:27.887610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.245 I/O targets: 00:19:22.245 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:22.245 00:19:22.245 00:19:22.245 CUnit - A unit testing framework for C - Version 2.1-3 00:19:22.245 http://cunit.sourceforge.net/ 00:19:22.245 00:19:22.245 00:19:22.245 Suite: bdevio tests on: Nvme1n1 00:19:22.246 Test: blockdev write read block ...passed 00:19:22.246 Test: blockdev write zeroes read block ...passed 00:19:22.246 Test: blockdev write zeroes read no split ...passed 00:19:22.246 Test: blockdev write zeroes read split ...passed 00:19:22.503 Test: blockdev write zeroes read split partial ...passed 00:19:22.503 Test: blockdev reset ...[2024-07-15 03:22:28.389774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.503 [2024-07-15 03:22:28.389888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd6c90 (9): Bad file descriptor 00:19:22.503 [2024-07-15 03:22:28.408633] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:22.503 passed 00:19:22.503 Test: blockdev write read 8 blocks ...passed 00:19:22.503 Test: blockdev write read size > 128k ...passed 00:19:22.503 Test: blockdev write read invalid size ...passed 00:19:22.503 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.503 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.503 Test: blockdev write read max offset ...passed 00:19:22.503 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.503 Test: blockdev writev readv 8 blocks ...passed 00:19:22.503 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.761 Test: blockdev writev readv block ...passed 00:19:22.761 Test: blockdev writev readv size > 128k ...passed 00:19:22.761 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.761 Test: blockdev comparev and writev ...[2024-07-15 03:22:28.661438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.761 [2024-07-15 03:22:28.661472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:22.761 [2024-07-15 03:22:28.661497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.761 [2024-07-15 03:22:28.661514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.761 [2024-07-15 03:22:28.661886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.761 [2024-07-15 03:22:28.661911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:22.761 [2024-07-15 03:22:28.661933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.761 [2024-07-15 03:22:28.661949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:22.761 [2024-07-15 03:22:28.662300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.761 [2024-07-15 03:22:28.662324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:22.761 [2024-07-15 03:22:28.662346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.761 [2024-07-15 03:22:28.662363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:22.761 [2024-07-15 03:22:28.662715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.761 [2024-07-15 03:22:28.662738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:22.761 [2024-07-15 03:22:28.662760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.762 [2024-07-15 03:22:28.662776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:22.762 passed 00:19:22.762 Test: blockdev nvme passthru rw ...passed 00:19:22.762 Test: blockdev nvme passthru vendor specific ...[2024-07-15 03:22:28.745181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.762 [2024-07-15 03:22:28.745208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:22.762 [2024-07-15 03:22:28.745374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.762 [2024-07-15 03:22:28.745397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:22.762 [2024-07-15 03:22:28.745553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.762 [2024-07-15 03:22:28.745575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:22.762 [2024-07-15 03:22:28.745734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.762 [2024-07-15 03:22:28.745757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:22.762 passed 00:19:22.762 Test: blockdev nvme admin passthru ...passed 00:19:22.762 Test: blockdev copy ...passed 00:19:22.762 00:19:22.762 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.762 suites 1 1 n/a 0 0 00:19:22.762 tests 23 23 23 0 0 00:19:22.762 asserts 152 152 152 0 n/a 00:19:22.762 00:19:22.762 Elapsed time = 1.146 seconds 00:19:23.019 03:22:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.019 03:22:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.019 03:22:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.019 rmmod nvme_tcp 00:19:23.019 rmmod nvme_fabrics 00:19:23.019 rmmod nvme_keyring 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3198976 ']' 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3198976 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3198976 ']' 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3198976 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3198976 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:23.019 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:23.020 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3198976' 00:19:23.020 killing process with pid 3198976 00:19:23.020 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3198976 00:19:23.020 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3198976 00:19:23.278 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.278 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.278 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.278 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.278 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.278 03:22:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.278 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.278 03:22:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.807 03:22:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:25.807 00:19:25.807 real 0m6.260s 00:19:25.807 user 0m10.627s 00:19:25.807 sys 0m1.993s 00:19:25.807 03:22:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:25.807 03:22:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:25.807 ************************************ 00:19:25.807 END TEST nvmf_bdevio 00:19:25.807 ************************************ 00:19:25.807 03:22:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:25.807 03:22:31 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:25.807 03:22:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:25.807 03:22:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.807 03:22:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:25.807 ************************************ 00:19:25.807 START TEST nvmf_auth_target 00:19:25.807 ************************************ 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:25.807 * Looking for test storage... 
00:19:25.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.807 03:22:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:25.808 03:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.708 03:22:33 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:27.708 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:27.708 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:27.708 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:27.708 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:27.708 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:27.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:19:27.708 00:19:27.708 --- 10.0.0.2 ping statistics --- 00:19:27.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.709 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:27.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:19:27.709 00:19:27.709 --- 10.0.0.1 ping statistics --- 00:19:27.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.709 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3201073 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3201073 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3201073 ']' 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
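The trace above is nvmf_tcp_init wiring up the test bed: the two ice ports found under 0000:0a:00.0/1 are flushed, cvl_0_0 is moved into a fresh network namespace to act as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, and one ping in each direction verifies the link. A minimal standalone sketch of that plumbing, reusing the interface and namespace names from this run (any two spare ports would do; run as root):

# Recreate the split-namespace NVMe-oF/TCP test bed from the trace above.
TGT_IF=cvl_0_0          # target-side port, isolated in its own netns
INI_IF=cvl_0_1          # initiator-side port, left in the root netns
NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1

Launching the target with an "ip netns exec $NS" prefix, as the nvmfappstart trace above does for nvmf_tgt, is what keeps the target's 10.0.0.2 listener on a separate network stack from the initiator on the same machine.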
00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.709 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:27.967 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:27.967 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:27.967 03:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.967 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.967 03:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3201212 00:19:27.967 03:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:27.967 03:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:27.967 03:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:27.967 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:27.967 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.967 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=224452bfd23b9634a3f02be755cda68caca7994a0d9a619b 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9v8 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 224452bfd23b9634a3f02be755cda68caca7994a0d9a619b 0 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 224452bfd23b9634a3f02be755cda68caca7994a0d9a619b 0 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=224452bfd23b9634a3f02be755cda68caca7994a0d9a619b 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9v8 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9v8 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.9v8 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d560e4e287ea0ae619a3e5b273f21b521666964d64e78d392fa0747503ac8c66 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.D22 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d560e4e287ea0ae619a3e5b273f21b521666964d64e78d392fa0747503ac8c66 3 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d560e4e287ea0ae619a3e5b273f21b521666964d64e78d392fa0747503ac8c66 3 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d560e4e287ea0ae619a3e5b273f21b521666964d64e78d392fa0747503ac8c66 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.D22 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.D22 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.D22 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=07b0be35f2a7613af667d08e716c4976 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uHP 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 07b0be35f2a7613af667d08e716c4976 1 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 07b0be35f2a7613af667d08e716c4976 1 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=07b0be35f2a7613af667d08e716c4976 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:27.968 03:22:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uHP 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uHP 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.uHP 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3a12436d3f38f78478df7b849c92f10f981cb7aa4e061ef5 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RGK 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3a12436d3f38f78478df7b849c92f10f981cb7aa4e061ef5 2 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3a12436d3f38f78478df7b849c92f10f981cb7aa4e061ef5 2 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3a12436d3f38f78478df7b849c92f10f981cb7aa4e061ef5 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RGK 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RGK 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.RGK 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ebb45070db64df0f8a54970a1055bd7f1fb1338a8f985511 00:19:27.968 
03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Rra 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ebb45070db64df0f8a54970a1055bd7f1fb1338a8f985511 2 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ebb45070db64df0f8a54970a1055bd7f1fb1338a8f985511 2 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ebb45070db64df0f8a54970a1055bd7f1fb1338a8f985511 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:27.968 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Rra 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Rra 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Rra 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d75cee0c8157b2085d27daafd6b92a02 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8Ax 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d75cee0c8157b2085d27daafd6b92a02 1 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d75cee0c8157b2085d27daafd6b92a02 1 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d75cee0c8157b2085d27daafd6b92a02 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8Ax 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8Ax 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.8Ax 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=897a325a545fe13348d8040aa4bd1ffc22531d56bb3e935ebc8824feb212e6ff 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.b8P 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 897a325a545fe13348d8040aa4bd1ffc22531d56bb3e935ebc8824feb212e6ff 3 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 897a325a545fe13348d8040aa4bd1ffc22531d56bb3e935ebc8824feb212e6ff 3 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=897a325a545fe13348d8040aa4bd1ffc22531d56bb3e935ebc8824feb212e6ff 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.b8P 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.b8P 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.b8P 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3201073 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3201073 ']' 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.227 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.228 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
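Before the two apps come up, gen_dhchap_key has produced the key files traced above — keys 9v8, uHP, Rra, b8P and the controller counterparts D22, RGK, 8Ax (ckeys[3] is left empty). Each call reads len/2 random bytes as a lowercase hex string via xxd, then wraps that ASCII string in the DHHC-1 secret representation, where the two-digit field is the hash id (00 = null, 01 = sha256, 02 = sha384, 03 = sha512, matching the digest argument in the trace) and the base64 payload is the secret bytes plus a CRC32 trailer. A standalone sketch of one such key, mirroring the xxd-plus-python pipeline above; the little-endian CRC32 append is my reading of the DH-HMAC-CHAP secret format, so treat that detail as an assumption:

# Generate 24 random bytes as 48 hex chars and format them as a DHHC-1 secret.
key=$(xxd -p -c0 -l 24 /dev/urandom)
python3 - "$key" 0 <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()       # the ASCII hex string itself is the secret
hash_id = int(sys.argv[2])          # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(secret).to_bytes(4, 'little')   # assumed little-endian trailer
print(f"DHHC-1:{hash_id:02}:{base64.b64encode(secret + crc).decode()}:")
EOF

Each resulting /tmp/spdk.key-* file is chmod 0600 and then registered twice with keyring_file_add_key, once against the target's /var/tmp/spdk.sock and once against the host app's /var/tmp/host.sock, which is what the paired rpc_cmd/hostrpc calls below do.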
00:19:28.228 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.228 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.485 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.485 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:28.485 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3201212 /var/tmp/host.sock 00:19:28.485 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3201212 ']' 00:19:28.485 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:28.485 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.485 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:28.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:28.485 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.485 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9v8 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.742 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.9v8 00:19:28.743 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.9v8 00:19:29.000 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.D22 ]] 00:19:29.000 03:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D22 00:19:29.000 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.000 03:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.000 03:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.000 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D22 00:19:29.000 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D22 00:19:29.257 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:29.257 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uHP 00:19:29.257 03:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.257 03:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.257 03:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.257 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.uHP 00:19:29.257 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.uHP 00:19:29.513 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.RGK ]] 00:19:29.513 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RGK 00:19:29.513 03:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.513 03:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.513 03:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.513 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RGK 00:19:29.513 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RGK 00:19:29.771 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:29.771 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Rra 00:19:29.771 03:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.771 03:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.771 03:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.771 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Rra 00:19:29.771 03:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Rra 00:19:30.028 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.8Ax ]] 00:19:30.028 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8Ax 00:19:30.028 03:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.028 03:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.028 03:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.028 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8Ax 00:19:30.029 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.8Ax 00:19:30.286 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:30.286 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.b8P 00:19:30.286 03:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.286 03:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.286 03:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.286 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.b8P 00:19:30.286 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.b8P 00:19:30.544 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:30.544 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:30.544 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.544 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.544 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.544 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.802 03:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.060 00:19:31.060 03:22:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.060 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.060 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.344 { 00:19:31.344 "cntlid": 1, 00:19:31.344 "qid": 0, 00:19:31.344 "state": "enabled", 00:19:31.344 "thread": "nvmf_tgt_poll_group_000", 00:19:31.344 "listen_address": { 00:19:31.344 "trtype": "TCP", 00:19:31.344 "adrfam": "IPv4", 00:19:31.344 "traddr": "10.0.0.2", 00:19:31.344 "trsvcid": "4420" 00:19:31.344 }, 00:19:31.344 "peer_address": { 00:19:31.344 "trtype": "TCP", 00:19:31.344 "adrfam": "IPv4", 00:19:31.344 "traddr": "10.0.0.1", 00:19:31.344 "trsvcid": "54076" 00:19:31.344 }, 00:19:31.344 "auth": { 00:19:31.344 "state": "completed", 00:19:31.344 "digest": "sha256", 00:19:31.344 "dhgroup": "null" 00:19:31.344 } 00:19:31.344 } 00:19:31.344 ]' 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.344 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.602 03:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:19:32.536 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.536 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.536 03:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.536 03:22:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.536 03:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.536 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.536 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.536 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.103 03:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.360 00:19:33.360 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.360 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.360 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.618 { 00:19:33.618 "cntlid": 3, 00:19:33.618 "qid": 0, 00:19:33.618 
"state": "enabled", 00:19:33.618 "thread": "nvmf_tgt_poll_group_000", 00:19:33.618 "listen_address": { 00:19:33.618 "trtype": "TCP", 00:19:33.618 "adrfam": "IPv4", 00:19:33.618 "traddr": "10.0.0.2", 00:19:33.618 "trsvcid": "4420" 00:19:33.618 }, 00:19:33.618 "peer_address": { 00:19:33.618 "trtype": "TCP", 00:19:33.618 "adrfam": "IPv4", 00:19:33.618 "traddr": "10.0.0.1", 00:19:33.618 "trsvcid": "54112" 00:19:33.618 }, 00:19:33.618 "auth": { 00:19:33.618 "state": "completed", 00:19:33.618 "digest": "sha256", 00:19:33.618 "dhgroup": "null" 00:19:33.618 } 00:19:33.618 } 00:19:33.618 ]' 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.618 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.877 03:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:19:34.809 03:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.809 03:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.809 03:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.809 03:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.809 03:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.809 03:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.809 03:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.810 03:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:35.066 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:35.066 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.066 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.066 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:35.066 03:22:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.066 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.066 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.066 03:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.066 03:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.066 03:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.066 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.066 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.324 00:19:35.324 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.324 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.324 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.581 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.581 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.581 03:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.581 03:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.839 03:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.839 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.839 { 00:19:35.839 "cntlid": 5, 00:19:35.839 "qid": 0, 00:19:35.839 "state": "enabled", 00:19:35.839 "thread": "nvmf_tgt_poll_group_000", 00:19:35.839 "listen_address": { 00:19:35.839 "trtype": "TCP", 00:19:35.839 "adrfam": "IPv4", 00:19:35.839 "traddr": "10.0.0.2", 00:19:35.839 "trsvcid": "4420" 00:19:35.839 }, 00:19:35.839 "peer_address": { 00:19:35.839 "trtype": "TCP", 00:19:35.839 "adrfam": "IPv4", 00:19:35.839 "traddr": "10.0.0.1", 00:19:35.839 "trsvcid": "34816" 00:19:35.839 }, 00:19:35.839 "auth": { 00:19:35.839 "state": "completed", 00:19:35.839 "digest": "sha256", 00:19:35.839 "dhgroup": "null" 00:19:35.839 } 00:19:35.839 } 00:19:35.839 ]' 00:19:35.839 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.839 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.839 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.839 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:35.839 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:35.839 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.839 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.839 03:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.097 03:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:19:37.029 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.029 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.029 03:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.029 03:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.029 03:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.029 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.029 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.029 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.294 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.861 00:19:37.861 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.861 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.861 03:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.119 { 00:19:38.119 "cntlid": 7, 00:19:38.119 "qid": 0, 00:19:38.119 "state": "enabled", 00:19:38.119 "thread": "nvmf_tgt_poll_group_000", 00:19:38.119 "listen_address": { 00:19:38.119 "trtype": "TCP", 00:19:38.119 "adrfam": "IPv4", 00:19:38.119 "traddr": "10.0.0.2", 00:19:38.119 "trsvcid": "4420" 00:19:38.119 }, 00:19:38.119 "peer_address": { 00:19:38.119 "trtype": "TCP", 00:19:38.119 "adrfam": "IPv4", 00:19:38.119 "traddr": "10.0.0.1", 00:19:38.119 "trsvcid": "34830" 00:19:38.119 }, 00:19:38.119 "auth": { 00:19:38.119 "state": "completed", 00:19:38.119 "digest": "sha256", 00:19:38.119 "dhgroup": "null" 00:19:38.119 } 00:19:38.119 } 00:19:38.119 ]' 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.119 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.377 03:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:19:39.308 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.308 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.308 03:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.308 03:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.308 03:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.308 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.308 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.308 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.308 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.565 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.822 00:19:39.822 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.822 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.822 03:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.079 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.337 { 00:19:40.337 "cntlid": 9, 00:19:40.337 "qid": 0, 00:19:40.337 "state": "enabled", 00:19:40.337 "thread": "nvmf_tgt_poll_group_000", 00:19:40.337 "listen_address": { 00:19:40.337 "trtype": "TCP", 00:19:40.337 "adrfam": "IPv4", 00:19:40.337 "traddr": "10.0.0.2", 00:19:40.337 "trsvcid": "4420" 00:19:40.337 }, 00:19:40.337 "peer_address": { 00:19:40.337 "trtype": "TCP", 00:19:40.337 "adrfam": "IPv4", 00:19:40.337 "traddr": "10.0.0.1", 00:19:40.337 "trsvcid": "34848" 00:19:40.337 }, 00:19:40.337 "auth": { 00:19:40.337 "state": "completed", 00:19:40.337 "digest": "sha256", 00:19:40.337 "dhgroup": "ffdhe2048" 00:19:40.337 } 00:19:40.337 } 00:19:40.337 ]' 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.337 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.594 03:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:19:41.524 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.524 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.525 03:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.525 03:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.525 03:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.525 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.525 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:41.525 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.782 03:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.040 00:19:42.040 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.040 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.040 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.297 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.297 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.297 03:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.297 03:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.297 03:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.297 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.297 { 00:19:42.297 "cntlid": 11, 00:19:42.297 "qid": 0, 00:19:42.297 "state": "enabled", 00:19:42.297 "thread": "nvmf_tgt_poll_group_000", 00:19:42.297 "listen_address": { 00:19:42.297 "trtype": "TCP", 00:19:42.297 "adrfam": "IPv4", 00:19:42.297 "traddr": "10.0.0.2", 00:19:42.297 "trsvcid": "4420" 00:19:42.297 }, 00:19:42.297 "peer_address": { 00:19:42.297 "trtype": "TCP", 00:19:42.297 "adrfam": "IPv4", 00:19:42.297 "traddr": "10.0.0.1", 00:19:42.297 "trsvcid": "34880" 00:19:42.297 }, 00:19:42.297 "auth": { 00:19:42.297 "state": "completed", 00:19:42.297 "digest": "sha256", 00:19:42.297 "dhgroup": "ffdhe2048" 00:19:42.297 } 00:19:42.297 } 00:19:42.297 ]' 00:19:42.297 
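The qpair dump that closes just above is what each iteration asserts on: after an authenticated attach, the test reads the subsystem's active qpair back from the target and checks that the negotiated digest, DH group and authentication state match what it just configured. A minimal sketch of that check, using the same rpc.py client and jq filters the trace shows (rpc.py path and subsystem NQN taken from this run; jq assumed on PATH):

# Sketch of the per-iteration verification seen in the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Query the target (default RPC socket) for the subsystem's qpairs.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# Assert the negotiated auth parameters match this iteration's config.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
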
03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.555 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.555 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.555 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.555 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.555 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.555 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.555 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.813 03:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:19:43.747 03:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.747 03:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.747 03:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.747 03:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.747 03:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.747 03:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.747 03:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.747 03:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.005 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:44.006 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.006 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.006 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:44.006 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.006 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.006 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.006 03:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.006 03:22:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:44.006 03:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.006 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.006 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.264 00:19:44.264 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.264 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.264 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.522 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.522 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.522 03:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.522 03:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.522 03:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.522 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.522 { 00:19:44.522 "cntlid": 13, 00:19:44.522 "qid": 0, 00:19:44.522 "state": "enabled", 00:19:44.522 "thread": "nvmf_tgt_poll_group_000", 00:19:44.522 "listen_address": { 00:19:44.522 "trtype": "TCP", 00:19:44.522 "adrfam": "IPv4", 00:19:44.522 "traddr": "10.0.0.2", 00:19:44.522 "trsvcid": "4420" 00:19:44.522 }, 00:19:44.522 "peer_address": { 00:19:44.522 "trtype": "TCP", 00:19:44.522 "adrfam": "IPv4", 00:19:44.522 "traddr": "10.0.0.1", 00:19:44.522 "trsvcid": "34900" 00:19:44.522 }, 00:19:44.522 "auth": { 00:19:44.522 "state": "completed", 00:19:44.522 "digest": "sha256", 00:19:44.522 "dhgroup": "ffdhe2048" 00:19:44.522 } 00:19:44.522 } 00:19:44.522 ]' 00:19:44.522 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.780 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.780 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.780 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:44.780 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.780 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.780 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.780 03:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.039 03:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:19:45.973 03:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.973 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.973 03:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.973 03:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.973 03:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.973 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.973 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:45.973 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.231 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.489 00:19:46.489 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.489 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:46.489 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.747 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.747 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.747 03:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.747 03:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.747 03:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.747 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.747 { 00:19:46.747 "cntlid": 15, 00:19:46.747 "qid": 0, 00:19:46.747 "state": "enabled", 00:19:46.747 "thread": "nvmf_tgt_poll_group_000", 00:19:46.747 "listen_address": { 00:19:46.747 "trtype": "TCP", 00:19:46.747 "adrfam": "IPv4", 00:19:46.747 "traddr": "10.0.0.2", 00:19:46.747 "trsvcid": "4420" 00:19:46.747 }, 00:19:46.747 "peer_address": { 00:19:46.747 "trtype": "TCP", 00:19:46.747 "adrfam": "IPv4", 00:19:46.747 "traddr": "10.0.0.1", 00:19:46.747 "trsvcid": "46880" 00:19:46.747 }, 00:19:46.747 "auth": { 00:19:46.747 "state": "completed", 00:19:46.747 "digest": "sha256", 00:19:46.747 "dhgroup": "ffdhe2048" 00:19:46.747 } 00:19:46.747 } 00:19:46.747 ]' 00:19:46.747 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.005 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.005 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.005 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:47.005 03:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.005 03:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.005 03:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.005 03:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.262 03:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:19:48.192 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.192 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.192 03:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.192 03:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.192 03:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.192 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.192 03:22:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.193 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.193 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.450 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:48.450 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.451 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:48.451 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:48.451 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.451 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.451 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.451 03:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.451 03:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.451 03:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.451 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.451 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.720 00:19:48.720 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.720 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.720 03:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.013 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.013 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.013 03:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.013 03:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.013 03:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.013 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.013 { 00:19:49.013 "cntlid": 17, 00:19:49.013 "qid": 0, 00:19:49.013 "state": "enabled", 00:19:49.013 "thread": "nvmf_tgt_poll_group_000", 00:19:49.013 "listen_address": { 00:19:49.013 "trtype": "TCP", 00:19:49.013 "adrfam": "IPv4", 
00:19:49.013 "traddr": "10.0.0.2", 00:19:49.013 "trsvcid": "4420" 00:19:49.013 }, 00:19:49.013 "peer_address": { 00:19:49.013 "trtype": "TCP", 00:19:49.013 "adrfam": "IPv4", 00:19:49.013 "traddr": "10.0.0.1", 00:19:49.013 "trsvcid": "46914" 00:19:49.014 }, 00:19:49.014 "auth": { 00:19:49.014 "state": "completed", 00:19:49.014 "digest": "sha256", 00:19:49.014 "dhgroup": "ffdhe3072" 00:19:49.014 } 00:19:49.014 } 00:19:49.014 ]' 00:19:49.014 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.014 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.014 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.014 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.014 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.271 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.271 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.271 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.271 03:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.645 03:22:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.645 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.903 00:19:50.903 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.903 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.903 03:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.161 03:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.161 03:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.161 03:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.161 03:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.162 03:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.162 03:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.162 { 00:19:51.162 "cntlid": 19, 00:19:51.162 "qid": 0, 00:19:51.162 "state": "enabled", 00:19:51.162 "thread": "nvmf_tgt_poll_group_000", 00:19:51.162 "listen_address": { 00:19:51.162 "trtype": "TCP", 00:19:51.162 "adrfam": "IPv4", 00:19:51.162 "traddr": "10.0.0.2", 00:19:51.162 "trsvcid": "4420" 00:19:51.162 }, 00:19:51.162 "peer_address": { 00:19:51.162 "trtype": "TCP", 00:19:51.162 "adrfam": "IPv4", 00:19:51.162 "traddr": "10.0.0.1", 00:19:51.162 "trsvcid": "46938" 00:19:51.162 }, 00:19:51.162 "auth": { 00:19:51.162 "state": "completed", 00:19:51.162 "digest": "sha256", 00:19:51.162 "dhgroup": "ffdhe3072" 00:19:51.162 } 00:19:51.162 } 00:19:51.162 ]' 00:19:51.162 03:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.162 03:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.162 03:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.436 03:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.436 03:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.436 03:22:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.436 03:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.436 03:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.694 03:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:19:52.629 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.629 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.629 03:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.629 03:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.629 03:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.629 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.629 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.629 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.887 03:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.145 00:19:53.145 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.145 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.145 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.403 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.403 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.404 03:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.404 03:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.404 03:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.404 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.404 { 00:19:53.404 "cntlid": 21, 00:19:53.404 "qid": 0, 00:19:53.404 "state": "enabled", 00:19:53.404 "thread": "nvmf_tgt_poll_group_000", 00:19:53.404 "listen_address": { 00:19:53.404 "trtype": "TCP", 00:19:53.404 "adrfam": "IPv4", 00:19:53.404 "traddr": "10.0.0.2", 00:19:53.404 "trsvcid": "4420" 00:19:53.404 }, 00:19:53.404 "peer_address": { 00:19:53.404 "trtype": "TCP", 00:19:53.404 "adrfam": "IPv4", 00:19:53.404 "traddr": "10.0.0.1", 00:19:53.404 "trsvcid": "46960" 00:19:53.404 }, 00:19:53.404 "auth": { 00:19:53.404 "state": "completed", 00:19:53.404 "digest": "sha256", 00:19:53.404 "dhgroup": "ffdhe3072" 00:19:53.404 } 00:19:53.404 } 00:19:53.404 ]' 00:19:53.404 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.404 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.404 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.404 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.404 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.662 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.662 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.662 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.921 03:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:19:54.854 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
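At this point the log has settled into its steady rhythm: one iteration per (dhgroup, keyid) pair, each ending in the kernel-initiator connect/disconnect seen just above. Reconstructed from the commands in this trace, the loop looks roughly like the sketch below (sketch only, not the verbatim test script; rpc_cmd talks to the target on its default socket, hostrpc to the host application on /var/tmp/host.sock, and secrets are elided). The DHHC-1:NN:...: strings passed to nvme connect are the standard DH-HMAC-CHAP secret representation; the two-digit field appears to encode the hash used to derive the key (00 for a plain key), which matches key0 through key3 in this run carrying DHHC-1:00: through DHHC-1:03: prefixes.

# Reconstructed shape of the loop; the dhgroups listed are the ones this
# stretch of the log exercises.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
  for keyid in 0 1 2 3; do
    # Pin the SPDK host driver to one digest/dhgroup combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    # Authorize the host on the target with this key pair (the trace shows
    # keyid 3 without a controller key, since its ckey entry is empty).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # Attach via the SPDK host app, verify the qpair, then detach.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn"
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # Repeat the connect through the kernel initiator, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done
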
00:19:54.854 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.854 03:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.854 03:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.854 03:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.854 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.854 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.854 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.854 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:54.854 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.855 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.855 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:54.855 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.855 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.855 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:54.855 03:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.855 03:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.855 03:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.855 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.855 03:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.421 00:19:55.421 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.421 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.421 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.421 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.421 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.421 03:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.421 03:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:55.421 03:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.421 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.421 { 00:19:55.421 "cntlid": 23, 00:19:55.421 "qid": 0, 00:19:55.421 "state": "enabled", 00:19:55.421 "thread": "nvmf_tgt_poll_group_000", 00:19:55.421 "listen_address": { 00:19:55.421 "trtype": "TCP", 00:19:55.421 "adrfam": "IPv4", 00:19:55.421 "traddr": "10.0.0.2", 00:19:55.421 "trsvcid": "4420" 00:19:55.421 }, 00:19:55.421 "peer_address": { 00:19:55.421 "trtype": "TCP", 00:19:55.421 "adrfam": "IPv4", 00:19:55.421 "traddr": "10.0.0.1", 00:19:55.421 "trsvcid": "51682" 00:19:55.421 }, 00:19:55.421 "auth": { 00:19:55.421 "state": "completed", 00:19:55.421 "digest": "sha256", 00:19:55.421 "dhgroup": "ffdhe3072" 00:19:55.421 } 00:19:55.421 } 00:19:55.421 ]' 00:19:55.678 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.678 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.678 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.678 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:55.678 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.678 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.678 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.678 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.936 03:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:19:56.868 03:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.868 03:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.868 03:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.868 03:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.868 03:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.868 03:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.868 03:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.868 03:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.868 03:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.126 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:19:57.126 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.126 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.127 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:57.127 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.127 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.127 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.127 03:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.127 03:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.127 03:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.127 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.127 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.384 00:19:57.384 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.384 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.384 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.641 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.641 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.641 03:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.641 03:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.641 03:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.641 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.641 { 00:19:57.641 "cntlid": 25, 00:19:57.641 "qid": 0, 00:19:57.641 "state": "enabled", 00:19:57.641 "thread": "nvmf_tgt_poll_group_000", 00:19:57.641 "listen_address": { 00:19:57.641 "trtype": "TCP", 00:19:57.641 "adrfam": "IPv4", 00:19:57.641 "traddr": "10.0.0.2", 00:19:57.641 "trsvcid": "4420" 00:19:57.641 }, 00:19:57.641 "peer_address": { 00:19:57.641 "trtype": "TCP", 00:19:57.641 "adrfam": "IPv4", 00:19:57.641 "traddr": "10.0.0.1", 00:19:57.641 "trsvcid": "51716" 00:19:57.641 }, 00:19:57.641 "auth": { 00:19:57.641 "state": "completed", 00:19:57.641 "digest": "sha256", 00:19:57.641 "dhgroup": "ffdhe4096" 00:19:57.641 } 00:19:57.641 } 00:19:57.641 ]' 00:19:57.641 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.899 03:23:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.899 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.899 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:57.899 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.899 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.899 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.899 03:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.157 03:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:19:59.088 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.088 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.088 03:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.088 03:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.088 03:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.088 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.088 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.088 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.345 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:59.345 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.345 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.345 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:59.345 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:59.345 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.345 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.345 03:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.345 03:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.345 03:23:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.345 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.345 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.603 00:19:59.860 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.860 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.860 03:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.118 { 00:20:00.118 "cntlid": 27, 00:20:00.118 "qid": 0, 00:20:00.118 "state": "enabled", 00:20:00.118 "thread": "nvmf_tgt_poll_group_000", 00:20:00.118 "listen_address": { 00:20:00.118 "trtype": "TCP", 00:20:00.118 "adrfam": "IPv4", 00:20:00.118 "traddr": "10.0.0.2", 00:20:00.118 "trsvcid": "4420" 00:20:00.118 }, 00:20:00.118 "peer_address": { 00:20:00.118 "trtype": "TCP", 00:20:00.118 "adrfam": "IPv4", 00:20:00.118 "traddr": "10.0.0.1", 00:20:00.118 "trsvcid": "51738" 00:20:00.118 }, 00:20:00.118 "auth": { 00:20:00.118 "state": "completed", 00:20:00.118 "digest": "sha256", 00:20:00.118 "dhgroup": "ffdhe4096" 00:20:00.118 } 00:20:00.118 } 00:20:00.118 ]' 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.118 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.376 03:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:20:01.307 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.307 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.307 03:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.307 03:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.307 03:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.307 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.307 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:01.307 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.565 03:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.130 00:20:02.130 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.130 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.130 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.388 { 00:20:02.388 "cntlid": 29, 00:20:02.388 "qid": 0, 00:20:02.388 "state": "enabled", 00:20:02.388 "thread": "nvmf_tgt_poll_group_000", 00:20:02.388 "listen_address": { 00:20:02.388 "trtype": "TCP", 00:20:02.388 "adrfam": "IPv4", 00:20:02.388 "traddr": "10.0.0.2", 00:20:02.388 "trsvcid": "4420" 00:20:02.388 }, 00:20:02.388 "peer_address": { 00:20:02.388 "trtype": "TCP", 00:20:02.388 "adrfam": "IPv4", 00:20:02.388 "traddr": "10.0.0.1", 00:20:02.388 "trsvcid": "51772" 00:20:02.388 }, 00:20:02.388 "auth": { 00:20:02.388 "state": "completed", 00:20:02.388 "digest": "sha256", 00:20:02.388 "dhgroup": "ffdhe4096" 00:20:02.388 } 00:20:02.388 } 00:20:02.388 ]' 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.388 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.646 03:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:20:03.578 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.578 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.578 03:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.578 03:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.578 03:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
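For reference, every keyid pass above repeats the same connect_authenticate cycle from target/auth.sh: register the host NQN and its DH-HMAC-CHAP key(s) on the target, attach a controller through the host-side RPC socket (the authentication exchange runs during the fabrics connect), check the negotiated digest/dhgroup/state reported by nvmf_subsystem_get_qpairs, then detach again. A minimal sketch of one pass, with the long rpc.py path from the log abbreviated to rpc.py:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    # target side: allow this host, with key0 and controller key ckey0
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach; DH-HMAC-CHAP runs as part of the connect
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify: the qpair's auth block should show the digest/dhgroup under test
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'  # expect "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0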
00:20:03.578 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.578 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:03.578 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.836 03:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.401 00:20:04.401 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.401 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.401 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.658 { 00:20:04.658 "cntlid": 31, 00:20:04.658 "qid": 0, 00:20:04.658 "state": "enabled", 00:20:04.658 "thread": "nvmf_tgt_poll_group_000", 00:20:04.658 "listen_address": { 00:20:04.658 "trtype": "TCP", 00:20:04.658 "adrfam": "IPv4", 00:20:04.658 "traddr": "10.0.0.2", 00:20:04.658 "trsvcid": 
"4420" 00:20:04.658 }, 00:20:04.658 "peer_address": { 00:20:04.658 "trtype": "TCP", 00:20:04.658 "adrfam": "IPv4", 00:20:04.658 "traddr": "10.0.0.1", 00:20:04.658 "trsvcid": "51802" 00:20:04.658 }, 00:20:04.658 "auth": { 00:20:04.658 "state": "completed", 00:20:04.658 "digest": "sha256", 00:20:04.658 "dhgroup": "ffdhe4096" 00:20:04.658 } 00:20:04.658 } 00:20:04.658 ]' 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.658 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.916 03:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:20:05.848 03:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.848 03:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.848 03:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.848 03:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.848 03:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.848 03:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.848 03:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.848 03:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.848 03:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.105 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.700 00:20:06.700 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.700 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.700 03:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.958 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.958 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.958 03:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.958 03:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.958 03:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.958 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.958 { 00:20:06.958 "cntlid": 33, 00:20:06.958 "qid": 0, 00:20:06.958 "state": "enabled", 00:20:06.958 "thread": "nvmf_tgt_poll_group_000", 00:20:06.958 "listen_address": { 00:20:06.958 "trtype": "TCP", 00:20:06.958 "adrfam": "IPv4", 00:20:06.958 "traddr": "10.0.0.2", 00:20:06.958 "trsvcid": "4420" 00:20:06.958 }, 00:20:06.958 "peer_address": { 00:20:06.958 "trtype": "TCP", 00:20:06.958 "adrfam": "IPv4", 00:20:06.958 "traddr": "10.0.0.1", 00:20:06.958 "trsvcid": "45078" 00:20:06.958 }, 00:20:06.958 "auth": { 00:20:06.958 "state": "completed", 00:20:06.958 "digest": "sha256", 00:20:06.958 "dhgroup": "ffdhe6144" 00:20:06.958 } 00:20:06.958 } 00:20:06.958 ]' 00:20:06.958 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.958 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.958 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.216 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:07.216 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.216 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:07.216 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.216 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.472 03:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:20:08.402 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.402 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.402 03:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.402 03:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.402 03:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.402 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.402 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.402 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.660 03:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.225 00:20:09.225 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.225 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.225 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.482 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.482 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.483 03:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.483 03:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.483 03:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.483 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.483 { 00:20:09.483 "cntlid": 35, 00:20:09.483 "qid": 0, 00:20:09.483 "state": "enabled", 00:20:09.483 "thread": "nvmf_tgt_poll_group_000", 00:20:09.483 "listen_address": { 00:20:09.483 "trtype": "TCP", 00:20:09.483 "adrfam": "IPv4", 00:20:09.483 "traddr": "10.0.0.2", 00:20:09.483 "trsvcid": "4420" 00:20:09.483 }, 00:20:09.483 "peer_address": { 00:20:09.483 "trtype": "TCP", 00:20:09.483 "adrfam": "IPv4", 00:20:09.483 "traddr": "10.0.0.1", 00:20:09.483 "trsvcid": "45114" 00:20:09.483 }, 00:20:09.483 "auth": { 00:20:09.483 "state": "completed", 00:20:09.483 "digest": "sha256", 00:20:09.483 "dhgroup": "ffdhe6144" 00:20:09.483 } 00:20:09.483 } 00:20:09.483 ]' 00:20:09.483 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.483 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.483 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.740 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:09.740 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.740 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.740 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.740 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.998 03:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:20:10.931 03:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
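The --dhchap-secret and --dhchap-ctrl-secret values passed to nvme connect above use the standard DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:, where <t> names the key transform (00 = unhashed, 01/02/03 = transformed with SHA-256/384/512) and the base64 payload is the key followed by a CRC-32. That is why the four keys in this run carry 00, 01, 02 and 03 prefixes: the prefix reflects how each key was generated and is independent of the ffdhe group being exercised. Secrets in this format can be produced with nvme-cli; an illustrative invocation (flag spellings may vary between nvme-cli versions):

    # illustrative: emit a SHA-256-transformed host key, i.e. DHHC-1:01:<base64>:
    nvme gen-dhchap-key --hmac=1 --key-length=32 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55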
00:20:10.931 03:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.931 03:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.931 03:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.931 03:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.931 03:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.931 03:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:10.931 03:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.189 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.755 00:20:11.755 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.755 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.755 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.013 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.013 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.013 03:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:12.013 03:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.013 03:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.013 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.013 { 00:20:12.013 "cntlid": 37, 00:20:12.013 "qid": 0, 00:20:12.013 "state": "enabled", 00:20:12.013 "thread": "nvmf_tgt_poll_group_000", 00:20:12.013 "listen_address": { 00:20:12.013 "trtype": "TCP", 00:20:12.013 "adrfam": "IPv4", 00:20:12.013 "traddr": "10.0.0.2", 00:20:12.013 "trsvcid": "4420" 00:20:12.013 }, 00:20:12.013 "peer_address": { 00:20:12.013 "trtype": "TCP", 00:20:12.013 "adrfam": "IPv4", 00:20:12.013 "traddr": "10.0.0.1", 00:20:12.013 "trsvcid": "45150" 00:20:12.013 }, 00:20:12.013 "auth": { 00:20:12.013 "state": "completed", 00:20:12.013 "digest": "sha256", 00:20:12.013 "dhgroup": "ffdhe6144" 00:20:12.013 } 00:20:12.013 } 00:20:12.013 ]' 00:20:12.013 03:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.013 03:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.013 03:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.013 03:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.013 03:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.013 03:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.014 03:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.014 03:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.272 03:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:20:13.205 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.205 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.205 03:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.205 03:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.205 03:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.205 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.205 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.205 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.463 03:23:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:13.463 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.463 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:13.463 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:13.463 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.463 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.464 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:13.464 03:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.464 03:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.464 03:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.464 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.464 03:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.030 00:20:14.030 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.030 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.030 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.287 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.287 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.287 03:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.287 03:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.287 03:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.287 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.287 { 00:20:14.287 "cntlid": 39, 00:20:14.287 "qid": 0, 00:20:14.287 "state": "enabled", 00:20:14.287 "thread": "nvmf_tgt_poll_group_000", 00:20:14.287 "listen_address": { 00:20:14.287 "trtype": "TCP", 00:20:14.287 "adrfam": "IPv4", 00:20:14.287 "traddr": "10.0.0.2", 00:20:14.287 "trsvcid": "4420" 00:20:14.287 }, 00:20:14.287 "peer_address": { 00:20:14.287 "trtype": "TCP", 00:20:14.287 "adrfam": "IPv4", 00:20:14.287 "traddr": "10.0.0.1", 00:20:14.287 "trsvcid": "45178" 00:20:14.287 }, 00:20:14.287 "auth": { 00:20:14.287 "state": "completed", 00:20:14.287 "digest": "sha256", 00:20:14.287 "dhgroup": "ffdhe6144" 00:20:14.287 } 00:20:14.287 } 00:20:14.287 ]' 00:20:14.287 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.545 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.545 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.545 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.545 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.545 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.545 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.545 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.803 03:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:20:15.737 03:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.737 03:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.737 03:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.737 03:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.737 03:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.737 03:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.737 03:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.737 03:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.737 03:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.996 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.929 00:20:16.929 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.929 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.929 03:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.187 { 00:20:17.187 "cntlid": 41, 00:20:17.187 "qid": 0, 00:20:17.187 "state": "enabled", 00:20:17.187 "thread": "nvmf_tgt_poll_group_000", 00:20:17.187 "listen_address": { 00:20:17.187 "trtype": "TCP", 00:20:17.187 "adrfam": "IPv4", 00:20:17.187 "traddr": "10.0.0.2", 00:20:17.187 "trsvcid": "4420" 00:20:17.187 }, 00:20:17.187 "peer_address": { 00:20:17.187 "trtype": "TCP", 00:20:17.187 "adrfam": "IPv4", 00:20:17.187 "traddr": "10.0.0.1", 00:20:17.187 "trsvcid": "57594" 00:20:17.187 }, 00:20:17.187 "auth": { 00:20:17.187 "state": "completed", 00:20:17.187 "digest": "sha256", 00:20:17.187 "dhgroup": "ffdhe8192" 00:20:17.187 } 00:20:17.187 } 00:20:17.187 ]' 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.187 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.444 03:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:20:18.374 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.374 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.374 03:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.374 03:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.374 03:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.374 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.374 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:18.374 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.632 03:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.563 00:20:19.563 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.563 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.563 03:23:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.820 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.820 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.820 03:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.820 03:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.820 03:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.820 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.820 { 00:20:19.820 "cntlid": 43, 00:20:19.820 "qid": 0, 00:20:19.820 "state": "enabled", 00:20:19.820 "thread": "nvmf_tgt_poll_group_000", 00:20:19.820 "listen_address": { 00:20:19.820 "trtype": "TCP", 00:20:19.820 "adrfam": "IPv4", 00:20:19.820 "traddr": "10.0.0.2", 00:20:19.820 "trsvcid": "4420" 00:20:19.820 }, 00:20:19.820 "peer_address": { 00:20:19.820 "trtype": "TCP", 00:20:19.820 "adrfam": "IPv4", 00:20:19.820 "traddr": "10.0.0.1", 00:20:19.820 "trsvcid": "57636" 00:20:19.820 }, 00:20:19.820 "auth": { 00:20:19.820 "state": "completed", 00:20:19.820 "digest": "sha256", 00:20:19.820 "dhgroup": "ffdhe8192" 00:20:19.820 } 00:20:19.820 } 00:20:19.820 ]' 00:20:19.820 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.820 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.820 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.821 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:19.821 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.078 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.078 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.078 03:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.336 03:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:20:21.267 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.268 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.268 03:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.268 03:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.268 03:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.268 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:20:21.268 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.268 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.525 03:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.454 00:20:22.454 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.454 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.454 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.710 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.710 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.710 03:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.710 03:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.710 03:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.710 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.710 { 00:20:22.710 "cntlid": 45, 00:20:22.710 "qid": 0, 00:20:22.710 "state": "enabled", 00:20:22.710 "thread": "nvmf_tgt_poll_group_000", 00:20:22.711 "listen_address": { 00:20:22.711 "trtype": "TCP", 00:20:22.711 "adrfam": "IPv4", 00:20:22.711 "traddr": "10.0.0.2", 00:20:22.711 
"trsvcid": "4420" 00:20:22.711 }, 00:20:22.711 "peer_address": { 00:20:22.711 "trtype": "TCP", 00:20:22.711 "adrfam": "IPv4", 00:20:22.711 "traddr": "10.0.0.1", 00:20:22.711 "trsvcid": "57660" 00:20:22.711 }, 00:20:22.711 "auth": { 00:20:22.711 "state": "completed", 00:20:22.711 "digest": "sha256", 00:20:22.711 "dhgroup": "ffdhe8192" 00:20:22.711 } 00:20:22.711 } 00:20:22.711 ]' 00:20:22.711 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.711 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.711 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.711 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.711 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.711 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.711 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.711 03:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.968 03:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:20:23.897 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.153 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.153 03:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.153 03:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.153 03:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.153 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.153 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.153 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.410 03:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.384 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.384 { 00:20:25.384 "cntlid": 47, 00:20:25.384 "qid": 0, 00:20:25.384 "state": "enabled", 00:20:25.384 "thread": "nvmf_tgt_poll_group_000", 00:20:25.384 "listen_address": { 00:20:25.384 "trtype": "TCP", 00:20:25.384 "adrfam": "IPv4", 00:20:25.384 "traddr": "10.0.0.2", 00:20:25.384 "trsvcid": "4420" 00:20:25.384 }, 00:20:25.384 "peer_address": { 00:20:25.384 "trtype": "TCP", 00:20:25.384 "adrfam": "IPv4", 00:20:25.384 "traddr": "10.0.0.1", 00:20:25.384 "trsvcid": "57688" 00:20:25.384 }, 00:20:25.384 "auth": { 00:20:25.384 "state": "completed", 00:20:25.384 "digest": "sha256", 00:20:25.384 "dhgroup": "ffdhe8192" 00:20:25.384 } 00:20:25.384 } 00:20:25.384 ]' 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.384 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.641 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:25.641 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.641 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.641 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:20:25.641 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.899 03:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:20:26.831 03:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.831 03:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.831 03:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.831 03:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.831 03:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.831 03:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:26.831 03:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.831 03:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.831 03:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:26.831 03:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.089 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.346 00:20:27.346 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.346 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.346 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.603 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.604 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.604 03:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.604 03:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.604 03:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.604 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.604 { 00:20:27.604 "cntlid": 49, 00:20:27.604 "qid": 0, 00:20:27.604 "state": "enabled", 00:20:27.604 "thread": "nvmf_tgt_poll_group_000", 00:20:27.604 "listen_address": { 00:20:27.604 "trtype": "TCP", 00:20:27.604 "adrfam": "IPv4", 00:20:27.604 "traddr": "10.0.0.2", 00:20:27.604 "trsvcid": "4420" 00:20:27.604 }, 00:20:27.604 "peer_address": { 00:20:27.604 "trtype": "TCP", 00:20:27.604 "adrfam": "IPv4", 00:20:27.604 "traddr": "10.0.0.1", 00:20:27.604 "trsvcid": "56086" 00:20:27.604 }, 00:20:27.604 "auth": { 00:20:27.604 "state": "completed", 00:20:27.604 "digest": "sha384", 00:20:27.604 "dhgroup": "null" 00:20:27.604 } 00:20:27.604 } 00:20:27.604 ]' 00:20:27.604 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.861 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.861 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.861 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:27.861 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.861 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.861 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.861 03:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.118 03:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:20:29.049 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.049 03:23:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.049 03:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.049 03:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.049 03:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.049 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.049 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.049 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.306 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.564 00:20:29.564 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.564 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.564 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.822 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.822 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.822 03:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.822 03:23:35 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:29.822 03:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.822 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.822 { 00:20:29.822 "cntlid": 51, 00:20:29.822 "qid": 0, 00:20:29.822 "state": "enabled", 00:20:29.822 "thread": "nvmf_tgt_poll_group_000", 00:20:29.822 "listen_address": { 00:20:29.822 "trtype": "TCP", 00:20:29.822 "adrfam": "IPv4", 00:20:29.822 "traddr": "10.0.0.2", 00:20:29.822 "trsvcid": "4420" 00:20:29.822 }, 00:20:29.822 "peer_address": { 00:20:29.822 "trtype": "TCP", 00:20:29.822 "adrfam": "IPv4", 00:20:29.822 "traddr": "10.0.0.1", 00:20:29.822 "trsvcid": "56112" 00:20:29.822 }, 00:20:29.822 "auth": { 00:20:29.822 "state": "completed", 00:20:29.822 "digest": "sha384", 00:20:29.822 "dhgroup": "null" 00:20:29.822 } 00:20:29.822 } 00:20:29.822 ]' 00:20:29.822 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.822 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.822 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.079 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:30.079 03:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.080 03:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.080 03:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.080 03:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.338 03:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:20:31.271 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.272 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.272 03:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.272 03:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.272 03:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.272 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.272 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.272 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.529 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:31.529 
03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.529 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.529 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:31.529 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.529 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.529 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.530 03:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.530 03:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.530 03:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.530 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.530 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.787 00:20:31.787 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.787 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.787 03:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.046 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.046 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.046 03:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.046 03:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.046 03:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.046 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.046 { 00:20:32.046 "cntlid": 53, 00:20:32.046 "qid": 0, 00:20:32.046 "state": "enabled", 00:20:32.046 "thread": "nvmf_tgt_poll_group_000", 00:20:32.046 "listen_address": { 00:20:32.046 "trtype": "TCP", 00:20:32.046 "adrfam": "IPv4", 00:20:32.046 "traddr": "10.0.0.2", 00:20:32.046 "trsvcid": "4420" 00:20:32.046 }, 00:20:32.046 "peer_address": { 00:20:32.046 "trtype": "TCP", 00:20:32.046 "adrfam": "IPv4", 00:20:32.046 "traddr": "10.0.0.1", 00:20:32.046 "trsvcid": "56130" 00:20:32.046 }, 00:20:32.046 "auth": { 00:20:32.046 "state": "completed", 00:20:32.046 "digest": "sha384", 00:20:32.046 "dhgroup": "null" 00:20:32.046 } 00:20:32.046 } 00:20:32.046 ]' 00:20:32.046 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.046 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:32.046 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.046 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:32.046 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.304 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.304 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.304 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.562 03:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:20:33.495 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.495 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.495 03:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.495 03:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.495 03:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.495 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.495 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:33.495 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.753 03:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.011 00:20:34.011 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.011 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.011 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.269 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.269 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.269 03:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.269 03:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.269 03:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.269 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.269 { 00:20:34.269 "cntlid": 55, 00:20:34.269 "qid": 0, 00:20:34.269 "state": "enabled", 00:20:34.269 "thread": "nvmf_tgt_poll_group_000", 00:20:34.269 "listen_address": { 00:20:34.269 "trtype": "TCP", 00:20:34.269 "adrfam": "IPv4", 00:20:34.269 "traddr": "10.0.0.2", 00:20:34.269 "trsvcid": "4420" 00:20:34.269 }, 00:20:34.269 "peer_address": { 00:20:34.269 "trtype": "TCP", 00:20:34.269 "adrfam": "IPv4", 00:20:34.269 "traddr": "10.0.0.1", 00:20:34.269 "trsvcid": "56152" 00:20:34.269 }, 00:20:34.269 "auth": { 00:20:34.269 "state": "completed", 00:20:34.269 "digest": "sha384", 00:20:34.269 "dhgroup": "null" 00:20:34.269 } 00:20:34.269 } 00:20:34.269 ]' 00:20:34.269 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.269 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.269 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.527 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:34.527 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.527 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.527 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.527 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.785 03:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:20:35.715 03:23:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.715 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.715 03:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.715 03:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.715 03:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.715 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.715 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.715 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.715 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.972 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:35.972 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.972 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.972 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:35.972 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.972 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.972 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.972 03:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.972 03:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.973 03:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.973 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.973 03:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.230 00:20:36.230 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.230 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.230 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.488 03:23:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.488 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.488 03:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.488 03:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.488 03:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.488 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.488 { 00:20:36.488 "cntlid": 57, 00:20:36.488 "qid": 0, 00:20:36.488 "state": "enabled", 00:20:36.488 "thread": "nvmf_tgt_poll_group_000", 00:20:36.488 "listen_address": { 00:20:36.488 "trtype": "TCP", 00:20:36.488 "adrfam": "IPv4", 00:20:36.488 "traddr": "10.0.0.2", 00:20:36.488 "trsvcid": "4420" 00:20:36.488 }, 00:20:36.488 "peer_address": { 00:20:36.488 "trtype": "TCP", 00:20:36.488 "adrfam": "IPv4", 00:20:36.488 "traddr": "10.0.0.1", 00:20:36.488 "trsvcid": "48710" 00:20:36.488 }, 00:20:36.488 "auth": { 00:20:36.488 "state": "completed", 00:20:36.488 "digest": "sha384", 00:20:36.488 "dhgroup": "ffdhe2048" 00:20:36.488 } 00:20:36.488 } 00:20:36.488 ]' 00:20:36.488 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.488 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.488 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.488 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:36.488 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.746 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.746 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.746 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.003 03:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:20:37.942 03:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.942 03:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.942 03:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.942 03:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.942 03:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.942 03:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.942 03:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.942 03:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.942 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.507 00:20:38.507 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.507 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.507 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.507 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.507 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.507 03:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.507 03:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.507 03:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.507 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.507 { 00:20:38.507 "cntlid": 59, 00:20:38.507 "qid": 0, 00:20:38.507 "state": "enabled", 00:20:38.507 "thread": "nvmf_tgt_poll_group_000", 00:20:38.507 "listen_address": { 00:20:38.507 "trtype": "TCP", 00:20:38.507 "adrfam": "IPv4", 00:20:38.507 "traddr": "10.0.0.2", 00:20:38.507 "trsvcid": "4420" 00:20:38.507 }, 00:20:38.507 "peer_address": { 00:20:38.507 "trtype": "TCP", 00:20:38.507 "adrfam": "IPv4", 00:20:38.507 
"traddr": "10.0.0.1", 00:20:38.507 "trsvcid": "48736" 00:20:38.507 }, 00:20:38.507 "auth": { 00:20:38.507 "state": "completed", 00:20:38.507 "digest": "sha384", 00:20:38.507 "dhgroup": "ffdhe2048" 00:20:38.507 } 00:20:38.507 } 00:20:38.507 ]' 00:20:38.764 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.764 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.764 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.764 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:38.764 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.764 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.764 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.764 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.022 03:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:20:39.955 03:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.955 03:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.955 03:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.955 03:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.955 03:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.955 03:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.955 03:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:39.955 03:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.213 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.471 00:20:40.471 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.471 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.471 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.729 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.729 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.729 03:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.729 03:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.729 03:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.729 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.729 { 00:20:40.729 "cntlid": 61, 00:20:40.729 "qid": 0, 00:20:40.729 "state": "enabled", 00:20:40.729 "thread": "nvmf_tgt_poll_group_000", 00:20:40.729 "listen_address": { 00:20:40.729 "trtype": "TCP", 00:20:40.729 "adrfam": "IPv4", 00:20:40.729 "traddr": "10.0.0.2", 00:20:40.729 "trsvcid": "4420" 00:20:40.729 }, 00:20:40.729 "peer_address": { 00:20:40.729 "trtype": "TCP", 00:20:40.729 "adrfam": "IPv4", 00:20:40.729 "traddr": "10.0.0.1", 00:20:40.729 "trsvcid": "48758" 00:20:40.729 }, 00:20:40.729 "auth": { 00:20:40.729 "state": "completed", 00:20:40.729 "digest": "sha384", 00:20:40.729 "dhgroup": "ffdhe2048" 00:20:40.729 } 00:20:40.729 } 00:20:40.729 ]' 00:20:40.729 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.729 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.729 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.729 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.729 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.987 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.987 03:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.987 03:23:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.244 03:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.218 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.500 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.500 03:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.500 03:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.500 03:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.500 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.501 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.759 00:20:42.759 03:23:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.759 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.759 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.016 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.016 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.016 03:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.016 03:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.016 03:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.016 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.016 { 00:20:43.016 "cntlid": 63, 00:20:43.016 "qid": 0, 00:20:43.016 "state": "enabled", 00:20:43.016 "thread": "nvmf_tgt_poll_group_000", 00:20:43.016 "listen_address": { 00:20:43.016 "trtype": "TCP", 00:20:43.016 "adrfam": "IPv4", 00:20:43.016 "traddr": "10.0.0.2", 00:20:43.016 "trsvcid": "4420" 00:20:43.016 }, 00:20:43.016 "peer_address": { 00:20:43.016 "trtype": "TCP", 00:20:43.016 "adrfam": "IPv4", 00:20:43.016 "traddr": "10.0.0.1", 00:20:43.016 "trsvcid": "48782" 00:20:43.016 }, 00:20:43.016 "auth": { 00:20:43.016 "state": "completed", 00:20:43.016 "digest": "sha384", 00:20:43.016 "dhgroup": "ffdhe2048" 00:20:43.016 } 00:20:43.016 } 00:20:43.016 ]' 00:20:43.016 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.016 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.016 03:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.016 03:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:43.016 03:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.016 03:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.016 03:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.016 03:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.274 03:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:20:44.206 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.206 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.206 03:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.206 03:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
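
The passes above all follow the script's connect_authenticate helper: pin the host-side DH-HMAC-CHAP digest and DH group, authorize the host NQN on the subsystem with a key pair, attach a controller through the host-side SPDK instance, and read the negotiated parameters back off the qpair. A minimal standalone sketch of one such pass (the sha384/ffdhe2048 round with key2), reusing the socket, paths, and NQNs from this run — variable names are illustrative, the target is assumed to answer on its default RPC socket, and the named keys key2/ckey2 are assumed to have been registered earlier in auth.sh:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Host side: offer exactly one digest and one DH group during negotiation.
  "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # Target side: authorize the host with a bidirectional key pair
  # (key2 authenticates the host, ckey2 authenticates the controller).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach a controller; authentication runs during connect.
  "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Target side: confirm what the qpair actually negotiated; the script's jq
  # checks require .digest, .dhgroup, and .state to come back as configured
  # ("sha384", "ffdhe2048", "completed") for the pass to count.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'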
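Each pass then closes by exercising the Linux kernel initiator with the same material: nvme-cli connects in-band with the DHHC-1 secret(s), the controller is confirmed up, and the host is disconnected and de-authorized before the next digest/DH-group/key combination starts. A sketch of that closing step for the key3 pass just above, which is one-way (the subsystem host entry carries no controller key, so --dhchap-ctrl-secret is omitted); NQNs, host ID, and the throwaway test secret are taken verbatim from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=5b23e107-7094-e311-b1cb-001e67a97d55

  # Kernel initiator: in-band DH-HMAC-CHAP with the host secret only.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=:'

  # Tear down: drop the kernel controller, then revoke the host on the target
  # so the next combination starts from a clean subsystem.
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "nqn.2014-08.org.nvmexpress:uuid:${hostid}"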
00:20:44.206 03:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.206 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.206 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.206 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.206 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.463 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.027 00:20:45.027 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.027 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.027 03:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.284 { 
00:20:45.284 "cntlid": 65, 00:20:45.284 "qid": 0, 00:20:45.284 "state": "enabled", 00:20:45.284 "thread": "nvmf_tgt_poll_group_000", 00:20:45.284 "listen_address": { 00:20:45.284 "trtype": "TCP", 00:20:45.284 "adrfam": "IPv4", 00:20:45.284 "traddr": "10.0.0.2", 00:20:45.284 "trsvcid": "4420" 00:20:45.284 }, 00:20:45.284 "peer_address": { 00:20:45.284 "trtype": "TCP", 00:20:45.284 "adrfam": "IPv4", 00:20:45.284 "traddr": "10.0.0.1", 00:20:45.284 "trsvcid": "48820" 00:20:45.284 }, 00:20:45.284 "auth": { 00:20:45.284 "state": "completed", 00:20:45.284 "digest": "sha384", 00:20:45.284 "dhgroup": "ffdhe3072" 00:20:45.284 } 00:20:45.284 } 00:20:45.284 ]' 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.284 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.541 03:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:20:46.491 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.491 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.491 03:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.491 03:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.491 03:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.491 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.491 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.491 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.749 03:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.313 00:20:47.313 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.313 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.313 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.570 { 00:20:47.570 "cntlid": 67, 00:20:47.570 "qid": 0, 00:20:47.570 "state": "enabled", 00:20:47.570 "thread": "nvmf_tgt_poll_group_000", 00:20:47.570 "listen_address": { 00:20:47.570 "trtype": "TCP", 00:20:47.570 "adrfam": "IPv4", 00:20:47.570 "traddr": "10.0.0.2", 00:20:47.570 "trsvcid": "4420" 00:20:47.570 }, 00:20:47.570 "peer_address": { 00:20:47.570 "trtype": "TCP", 00:20:47.570 "adrfam": "IPv4", 00:20:47.570 "traddr": "10.0.0.1", 00:20:47.570 "trsvcid": "33774" 00:20:47.570 }, 00:20:47.570 "auth": { 00:20:47.570 "state": "completed", 00:20:47.570 "digest": "sha384", 00:20:47.570 "dhgroup": "ffdhe3072" 00:20:47.570 } 00:20:47.570 } 00:20:47.570 ]' 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.570 03:23:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.570 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.827 03:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:20:48.760 03:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.760 03:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.760 03:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.760 03:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.760 03:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.760 03:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.760 03:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:48.760 03:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.018 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.276 00:20:49.534 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.534 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.534 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.534 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.534 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.534 03:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.534 03:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.534 03:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.534 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.534 { 00:20:49.534 "cntlid": 69, 00:20:49.534 "qid": 0, 00:20:49.534 "state": "enabled", 00:20:49.534 "thread": "nvmf_tgt_poll_group_000", 00:20:49.534 "listen_address": { 00:20:49.534 "trtype": "TCP", 00:20:49.534 "adrfam": "IPv4", 00:20:49.534 "traddr": "10.0.0.2", 00:20:49.534 "trsvcid": "4420" 00:20:49.534 }, 00:20:49.534 "peer_address": { 00:20:49.534 "trtype": "TCP", 00:20:49.534 "adrfam": "IPv4", 00:20:49.534 "traddr": "10.0.0.1", 00:20:49.534 "trsvcid": "33808" 00:20:49.534 }, 00:20:49.534 "auth": { 00:20:49.534 "state": "completed", 00:20:49.534 "digest": "sha384", 00:20:49.534 "dhgroup": "ffdhe3072" 00:20:49.534 } 00:20:49.534 } 00:20:49.534 ]' 00:20:49.792 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.792 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.792 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.792 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.792 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.792 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.792 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.792 03:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.050 03:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret 
DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:20:50.985 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.985 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.985 03:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.985 03:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.985 03:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.985 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.985 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:50.985 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:51.242 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:51.243 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.243 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.243 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:51.243 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.243 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.243 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:51.243 03:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.243 03:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.243 03:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.243 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.243 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.808 00:20:51.808 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.808 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.808 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.808 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.808 03:23:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.808 03:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.808 03:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.808 03:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.808 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.808 { 00:20:51.808 "cntlid": 71, 00:20:51.808 "qid": 0, 00:20:51.808 "state": "enabled", 00:20:51.808 "thread": "nvmf_tgt_poll_group_000", 00:20:51.808 "listen_address": { 00:20:51.808 "trtype": "TCP", 00:20:51.808 "adrfam": "IPv4", 00:20:51.808 "traddr": "10.0.0.2", 00:20:51.808 "trsvcid": "4420" 00:20:51.808 }, 00:20:51.808 "peer_address": { 00:20:51.808 "trtype": "TCP", 00:20:51.808 "adrfam": "IPv4", 00:20:51.808 "traddr": "10.0.0.1", 00:20:51.808 "trsvcid": "33824" 00:20:51.808 }, 00:20:51.808 "auth": { 00:20:51.808 "state": "completed", 00:20:51.808 "digest": "sha384", 00:20:51.808 "dhgroup": "ffdhe3072" 00:20:51.808 } 00:20:51.808 } 00:20:51.808 ]' 00:20:51.808 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.066 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.066 03:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.066 03:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:52.066 03:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.066 03:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.066 03:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.066 03:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.324 03:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:20:53.258 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.258 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.258 03:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.258 03:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.258 03:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.258 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.258 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.258 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.258 03:23:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.516 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.081 00:20:54.081 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.081 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.081 03:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.339 { 00:20:54.339 "cntlid": 73, 00:20:54.339 "qid": 0, 00:20:54.339 "state": "enabled", 00:20:54.339 "thread": "nvmf_tgt_poll_group_000", 00:20:54.339 "listen_address": { 00:20:54.339 "trtype": "TCP", 00:20:54.339 "adrfam": "IPv4", 00:20:54.339 "traddr": "10.0.0.2", 00:20:54.339 "trsvcid": "4420" 00:20:54.339 }, 00:20:54.339 "peer_address": { 00:20:54.339 "trtype": "TCP", 00:20:54.339 "adrfam": "IPv4", 00:20:54.339 "traddr": "10.0.0.1", 00:20:54.339 "trsvcid": "33854" 00:20:54.339 }, 00:20:54.339 "auth": { 00:20:54.339 
"state": "completed", 00:20:54.339 "digest": "sha384", 00:20:54.339 "dhgroup": "ffdhe4096" 00:20:54.339 } 00:20:54.339 } 00:20:54.339 ]' 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.339 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.598 03:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:20:55.532 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.532 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.532 03:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.532 03:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.532 03:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.532 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.532 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:55.532 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.790 03:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.380 00:20:56.380 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.380 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.380 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.380 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.380 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.380 03:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.380 03:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.380 03:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.380 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.380 { 00:20:56.380 "cntlid": 75, 00:20:56.380 "qid": 0, 00:20:56.380 "state": "enabled", 00:20:56.380 "thread": "nvmf_tgt_poll_group_000", 00:20:56.380 "listen_address": { 00:20:56.380 "trtype": "TCP", 00:20:56.380 "adrfam": "IPv4", 00:20:56.380 "traddr": "10.0.0.2", 00:20:56.380 "trsvcid": "4420" 00:20:56.380 }, 00:20:56.380 "peer_address": { 00:20:56.380 "trtype": "TCP", 00:20:56.380 "adrfam": "IPv4", 00:20:56.380 "traddr": "10.0.0.1", 00:20:56.380 "trsvcid": "37364" 00:20:56.380 }, 00:20:56.380 "auth": { 00:20:56.380 "state": "completed", 00:20:56.380 "digest": "sha384", 00:20:56.380 "dhgroup": "ffdhe4096" 00:20:56.380 } 00:20:56.380 } 00:20:56.380 ]' 00:20:56.380 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.637 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.637 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.637 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:56.637 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.637 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.637 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.637 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.893 03:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:20:57.825 03:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.825 03:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.825 03:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.825 03:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.825 03:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.825 03:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.825 03:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:57.825 03:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.083 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:58.339 00:20:58.596 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.596 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.596 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.853 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.853 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.853 03:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.853 03:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.853 03:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.853 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.853 { 00:20:58.853 "cntlid": 77, 00:20:58.853 "qid": 0, 00:20:58.853 "state": "enabled", 00:20:58.853 "thread": "nvmf_tgt_poll_group_000", 00:20:58.853 "listen_address": { 00:20:58.853 "trtype": "TCP", 00:20:58.853 "adrfam": "IPv4", 00:20:58.853 "traddr": "10.0.0.2", 00:20:58.853 "trsvcid": "4420" 00:20:58.853 }, 00:20:58.853 "peer_address": { 00:20:58.853 "trtype": "TCP", 00:20:58.853 "adrfam": "IPv4", 00:20:58.853 "traddr": "10.0.0.1", 00:20:58.853 "trsvcid": "37380" 00:20:58.853 }, 00:20:58.853 "auth": { 00:20:58.853 "state": "completed", 00:20:58.853 "digest": "sha384", 00:20:58.853 "dhgroup": "ffdhe4096" 00:20:58.853 } 00:20:58.853 } 00:20:58.853 ]' 00:20:58.853 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.853 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.853 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.853 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:58.854 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.854 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.854 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.854 03:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.111 03:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:21:00.075 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.075 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.075 03:24:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.075 03:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.075 03:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.075 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.075 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.075 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.333 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.591 00:21:00.591 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.591 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.591 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.849 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.849 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.849 03:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.849 03:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.849 03:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.849 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.849 { 00:21:00.849 "cntlid": 79, 00:21:00.849 "qid": 
0, 00:21:00.849 "state": "enabled", 00:21:00.849 "thread": "nvmf_tgt_poll_group_000", 00:21:00.849 "listen_address": { 00:21:00.849 "trtype": "TCP", 00:21:00.849 "adrfam": "IPv4", 00:21:00.849 "traddr": "10.0.0.2", 00:21:00.849 "trsvcid": "4420" 00:21:00.849 }, 00:21:00.849 "peer_address": { 00:21:00.849 "trtype": "TCP", 00:21:00.849 "adrfam": "IPv4", 00:21:00.849 "traddr": "10.0.0.1", 00:21:00.849 "trsvcid": "37408" 00:21:00.849 }, 00:21:00.849 "auth": { 00:21:00.849 "state": "completed", 00:21:00.849 "digest": "sha384", 00:21:00.849 "dhgroup": "ffdhe4096" 00:21:00.849 } 00:21:00.849 } 00:21:00.849 ]' 00:21:00.849 03:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.107 03:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.107 03:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.107 03:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:01.107 03:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.107 03:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.107 03:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.107 03:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.365 03:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:21:02.297 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.297 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.297 03:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.297 03:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.297 03:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.297 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.297 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.297 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.297 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.560 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:02.560 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.560 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:02.560 03:24:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:02.560 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.560 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.560 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.560 03:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.560 03:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.560 03:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.560 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.560 03:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.127 00:21:03.127 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.127 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.127 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.387 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.387 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.387 03:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.387 03:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.387 03:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.387 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.387 { 00:21:03.387 "cntlid": 81, 00:21:03.387 "qid": 0, 00:21:03.387 "state": "enabled", 00:21:03.387 "thread": "nvmf_tgt_poll_group_000", 00:21:03.387 "listen_address": { 00:21:03.387 "trtype": "TCP", 00:21:03.387 "adrfam": "IPv4", 00:21:03.387 "traddr": "10.0.0.2", 00:21:03.387 "trsvcid": "4420" 00:21:03.387 }, 00:21:03.387 "peer_address": { 00:21:03.387 "trtype": "TCP", 00:21:03.387 "adrfam": "IPv4", 00:21:03.387 "traddr": "10.0.0.1", 00:21:03.387 "trsvcid": "37436" 00:21:03.387 }, 00:21:03.387 "auth": { 00:21:03.387 "state": "completed", 00:21:03.387 "digest": "sha384", 00:21:03.387 "dhgroup": "ffdhe6144" 00:21:03.387 } 00:21:03.387 } 00:21:03.387 ]' 00:21:03.387 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.387 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.387 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.387 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:03.387 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.646 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.646 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.646 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.904 03:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:21:04.840 03:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.840 03:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.840 03:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.840 03:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.840 03:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.840 03:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.840 03:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:04.840 03:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.098 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.664 00:21:05.664 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.664 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.664 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.923 { 00:21:05.923 "cntlid": 83, 00:21:05.923 "qid": 0, 00:21:05.923 "state": "enabled", 00:21:05.923 "thread": "nvmf_tgt_poll_group_000", 00:21:05.923 "listen_address": { 00:21:05.923 "trtype": "TCP", 00:21:05.923 "adrfam": "IPv4", 00:21:05.923 "traddr": "10.0.0.2", 00:21:05.923 "trsvcid": "4420" 00:21:05.923 }, 00:21:05.923 "peer_address": { 00:21:05.923 "trtype": "TCP", 00:21:05.923 "adrfam": "IPv4", 00:21:05.923 "traddr": "10.0.0.1", 00:21:05.923 "trsvcid": "38450" 00:21:05.923 }, 00:21:05.923 "auth": { 00:21:05.923 "state": "completed", 00:21:05.923 "digest": "sha384", 00:21:05.923 "dhgroup": "ffdhe6144" 00:21:05.923 } 00:21:05.923 } 00:21:05.923 ]' 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.923 03:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.182 03:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret 
DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:21:07.116 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.116 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.116 03:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.116 03:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.116 03:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.116 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.116 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.116 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.374 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.939 00:21:07.939 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.939 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.939 03:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.197 03:24:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.197 03:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.197 03:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.197 03:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.197 03:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.197 03:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.197 { 00:21:08.197 "cntlid": 85, 00:21:08.197 "qid": 0, 00:21:08.197 "state": "enabled", 00:21:08.197 "thread": "nvmf_tgt_poll_group_000", 00:21:08.197 "listen_address": { 00:21:08.197 "trtype": "TCP", 00:21:08.197 "adrfam": "IPv4", 00:21:08.197 "traddr": "10.0.0.2", 00:21:08.197 "trsvcid": "4420" 00:21:08.197 }, 00:21:08.197 "peer_address": { 00:21:08.197 "trtype": "TCP", 00:21:08.197 "adrfam": "IPv4", 00:21:08.197 "traddr": "10.0.0.1", 00:21:08.197 "trsvcid": "38480" 00:21:08.197 }, 00:21:08.197 "auth": { 00:21:08.197 "state": "completed", 00:21:08.197 "digest": "sha384", 00:21:08.197 "dhgroup": "ffdhe6144" 00:21:08.197 } 00:21:08.197 } 00:21:08.197 ]' 00:21:08.197 03:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.197 03:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.197 03:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.455 03:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.455 03:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.455 03:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.455 03:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.455 03:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.712 03:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:21:09.647 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.647 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.647 03:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.647 03:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.647 03:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.647 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.647 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
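The hostrpc call just above, like every hostrpc line in this trace, is expanded at target/auth.sh@31 into a direct scripts/rpc.py invocation against the host-side SPDK application's socket (/var/tmp/host.sock), while the bare rpc_cmd lines drive the NVMe-oF target. A minimal sketch of such a wrapper, reconstructed from the expansions alone; the variable names are assumptions, not the verbatim definition in target/auth.sh:

    # hostrpc: forward an RPC to the host-side SPDK app (sketch; names assumed)
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # per the paths in this log
    hostsock=/var/tmp/host.sock
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s "$hostsock" "$@"
    }

The expansion that follows shows exactly this substitution applied to bdev_nvme_set_options.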
00:21:09.647 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.905 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:09.905 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.906 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.906 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:09.906 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:09.906 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.906 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:09.906 03:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.906 03:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.906 03:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.906 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.906 03:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.472 00:21:10.472 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.472 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.472 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.731 { 00:21:10.731 "cntlid": 87, 00:21:10.731 "qid": 0, 00:21:10.731 "state": "enabled", 00:21:10.731 "thread": "nvmf_tgt_poll_group_000", 00:21:10.731 "listen_address": { 00:21:10.731 "trtype": "TCP", 00:21:10.731 "adrfam": "IPv4", 00:21:10.731 "traddr": "10.0.0.2", 00:21:10.731 "trsvcid": "4420" 00:21:10.731 }, 00:21:10.731 "peer_address": { 00:21:10.731 "trtype": "TCP", 00:21:10.731 "adrfam": "IPv4", 00:21:10.731 "traddr": "10.0.0.1", 00:21:10.731 "trsvcid": "38500" 00:21:10.731 }, 00:21:10.731 "auth": { 00:21:10.731 "state": "completed", 
00:21:10.731 "digest": "sha384", 00:21:10.731 "dhgroup": "ffdhe6144" 00:21:10.731 } 00:21:10.731 } 00:21:10.731 ]' 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.731 03:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.990 03:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:21:11.926 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.926 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.926 03:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.926 03:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.926 03:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.926 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.926 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.926 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.926 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.183 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:12.183 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.183 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:12.183 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:12.183 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:12.183 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.183 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:12.183 03:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.183 03:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.183 03:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.183 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.184 03:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.117 00:21:13.117 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.117 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.117 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.376 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.376 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.376 03:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.376 03:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.376 03:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.376 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.376 { 00:21:13.376 "cntlid": 89, 00:21:13.376 "qid": 0, 00:21:13.376 "state": "enabled", 00:21:13.376 "thread": "nvmf_tgt_poll_group_000", 00:21:13.376 "listen_address": { 00:21:13.376 "trtype": "TCP", 00:21:13.376 "adrfam": "IPv4", 00:21:13.376 "traddr": "10.0.0.2", 00:21:13.376 "trsvcid": "4420" 00:21:13.376 }, 00:21:13.376 "peer_address": { 00:21:13.376 "trtype": "TCP", 00:21:13.376 "adrfam": "IPv4", 00:21:13.376 "traddr": "10.0.0.1", 00:21:13.376 "trsvcid": "38536" 00:21:13.376 }, 00:21:13.376 "auth": { 00:21:13.376 "state": "completed", 00:21:13.376 "digest": "sha384", 00:21:13.376 "dhgroup": "ffdhe8192" 00:21:13.376 } 00:21:13.376 } 00:21:13.376 ]' 00:21:13.376 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.376 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.376 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.376 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.376 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.635 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.635 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.635 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.894 03:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:21:14.829 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.829 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.829 03:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.829 03:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.829 03:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.829 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.829 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.829 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.088 03:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
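The block above is one full pass of the connect_authenticate helper whose xtrace markers (target/auth.sh@34-49) repeat throughout this run: register the host on the subsystem with the key under test, attach a host-side controller with the same key, then confirm via nvmf_subsystem_get_qpairs that authentication completed with the expected digest and DH group before detaching. A sketch of that flow, reconstructed from the trace; the simplified argument handling and the keys/ckeys arrays are assumptions, while the NQNs are the ones used in this run:

    # connect_authenticate <digest> <dhgroup> <keyid> -- sketch from the xtrace
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 qpairs
        # ckey expands to nothing when no controller key is configured (cf. @37)
        local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" "${ckey[@]}"
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
        # the target reports the negotiated auth parameters on the qpair
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
        hostrpc bdev_nvme_detach_controller nvme0
    }

The trace then re-verifies the same key pair from the kernel host by passing the literal DHHC-1 secrets to nvme connect (target/auth.sh@52-56) before disconnecting and removing the host entry, which is the connect/disconnect pattern visible in the entries that follow.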
00:21:16.024 00:21:16.024 03:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.024 03:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.024 03:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.024 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.024 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.024 03:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.024 03:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.024 03:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.024 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.024 { 00:21:16.024 "cntlid": 91, 00:21:16.024 "qid": 0, 00:21:16.024 "state": "enabled", 00:21:16.024 "thread": "nvmf_tgt_poll_group_000", 00:21:16.024 "listen_address": { 00:21:16.024 "trtype": "TCP", 00:21:16.024 "adrfam": "IPv4", 00:21:16.024 "traddr": "10.0.0.2", 00:21:16.024 "trsvcid": "4420" 00:21:16.024 }, 00:21:16.024 "peer_address": { 00:21:16.024 "trtype": "TCP", 00:21:16.024 "adrfam": "IPv4", 00:21:16.024 "traddr": "10.0.0.1", 00:21:16.024 "trsvcid": "33532" 00:21:16.024 }, 00:21:16.024 "auth": { 00:21:16.024 "state": "completed", 00:21:16.024 "digest": "sha384", 00:21:16.024 "dhgroup": "ffdhe8192" 00:21:16.024 } 00:21:16.024 } 00:21:16.024 ]' 00:21:16.024 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.024 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.024 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.282 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.282 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.282 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.282 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.282 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.540 03:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:21:17.475 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.475 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.475 03:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:17.475 03:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.475 03:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.475 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.475 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.475 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.759 03:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.697 00:21:18.697 03:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.697 03:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.697 03:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.956 03:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.956 03:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.956 03:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.956 03:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.956 03:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.956 03:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.956 { 
00:21:18.956 "cntlid": 93, 00:21:18.956 "qid": 0, 00:21:18.956 "state": "enabled", 00:21:18.956 "thread": "nvmf_tgt_poll_group_000", 00:21:18.956 "listen_address": { 00:21:18.956 "trtype": "TCP", 00:21:18.956 "adrfam": "IPv4", 00:21:18.956 "traddr": "10.0.0.2", 00:21:18.956 "trsvcid": "4420" 00:21:18.956 }, 00:21:18.956 "peer_address": { 00:21:18.956 "trtype": "TCP", 00:21:18.956 "adrfam": "IPv4", 00:21:18.956 "traddr": "10.0.0.1", 00:21:18.956 "trsvcid": "33560" 00:21:18.956 }, 00:21:18.956 "auth": { 00:21:18.956 "state": "completed", 00:21:18.956 "digest": "sha384", 00:21:18.956 "dhgroup": "ffdhe8192" 00:21:18.956 } 00:21:18.956 } 00:21:18.956 ]' 00:21:18.956 03:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.956 03:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.956 03:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.956 03:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.956 03:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.956 03:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.956 03:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.956 03:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.215 03:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:21:20.152 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.152 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.152 03:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.152 03:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.410 03:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.410 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.410 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.410 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.668 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:20.668 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.668 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.668 03:24:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:20.668 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:20.668 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.668 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:20.668 03:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.668 03:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.668 03:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.668 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.668 03:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.602 00:21:21.602 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.602 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.602 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.602 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.602 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.602 03:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.602 03:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.602 03:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.602 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.602 { 00:21:21.602 "cntlid": 95, 00:21:21.602 "qid": 0, 00:21:21.602 "state": "enabled", 00:21:21.602 "thread": "nvmf_tgt_poll_group_000", 00:21:21.602 "listen_address": { 00:21:21.602 "trtype": "TCP", 00:21:21.602 "adrfam": "IPv4", 00:21:21.602 "traddr": "10.0.0.2", 00:21:21.602 "trsvcid": "4420" 00:21:21.602 }, 00:21:21.602 "peer_address": { 00:21:21.602 "trtype": "TCP", 00:21:21.602 "adrfam": "IPv4", 00:21:21.602 "traddr": "10.0.0.1", 00:21:21.602 "trsvcid": "33604" 00:21:21.602 }, 00:21:21.602 "auth": { 00:21:21.602 "state": "completed", 00:21:21.602 "digest": "sha384", 00:21:21.602 "dhgroup": "ffdhe8192" 00:21:21.602 } 00:21:21.602 } 00:21:21.602 ]' 00:21:21.602 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.860 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.860 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.860 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.860 03:24:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.860 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.860 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.860 03:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.118 03:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:21:23.051 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.051 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.051 03:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.051 03:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.051 03:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.051 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:23.051 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.051 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.051 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.051 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.309 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:23.309 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.309 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.309 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:23.309 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:23.309 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.309 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.309 03:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.309 03:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.309 03:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.309 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.310 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.567 00:21:23.567 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.567 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.567 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.825 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.825 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.825 03:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.825 03:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.825 03:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.825 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.825 { 00:21:23.825 "cntlid": 97, 00:21:23.825 "qid": 0, 00:21:23.825 "state": "enabled", 00:21:23.825 "thread": "nvmf_tgt_poll_group_000", 00:21:23.825 "listen_address": { 00:21:23.825 "trtype": "TCP", 00:21:23.825 "adrfam": "IPv4", 00:21:23.825 "traddr": "10.0.0.2", 00:21:23.825 "trsvcid": "4420" 00:21:23.825 }, 00:21:23.825 "peer_address": { 00:21:23.825 "trtype": "TCP", 00:21:23.825 "adrfam": "IPv4", 00:21:23.825 "traddr": "10.0.0.1", 00:21:23.825 "trsvcid": "33638" 00:21:23.825 }, 00:21:23.825 "auth": { 00:21:23.825 "state": "completed", 00:21:23.825 "digest": "sha512", 00:21:23.825 "dhgroup": "null" 00:21:23.825 } 00:21:23.825 } 00:21:23.825 ]' 00:21:23.825 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.825 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.825 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.084 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:24.084 03:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.084 03:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.084 03:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.084 03:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.342 03:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret 
DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:21:25.278 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.278 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.278 03:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.278 03:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.278 03:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.278 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.278 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.278 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.536 03:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.102 00:21:26.102 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.102 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.102 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.360 { 00:21:26.360 "cntlid": 99, 00:21:26.360 "qid": 0, 00:21:26.360 "state": "enabled", 00:21:26.360 "thread": "nvmf_tgt_poll_group_000", 00:21:26.360 "listen_address": { 00:21:26.360 "trtype": "TCP", 00:21:26.360 "adrfam": "IPv4", 00:21:26.360 "traddr": "10.0.0.2", 00:21:26.360 "trsvcid": "4420" 00:21:26.360 }, 00:21:26.360 "peer_address": { 00:21:26.360 "trtype": "TCP", 00:21:26.360 "adrfam": "IPv4", 00:21:26.360 "traddr": "10.0.0.1", 00:21:26.360 "trsvcid": "41910" 00:21:26.360 }, 00:21:26.360 "auth": { 00:21:26.360 "state": "completed", 00:21:26.360 "digest": "sha512", 00:21:26.360 "dhgroup": "null" 00:21:26.360 } 00:21:26.360 } 00:21:26.360 ]' 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.360 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.618 03:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:21:27.554 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.554 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.554 03:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.554 03:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.554 03:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.554 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.554 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:27.554 03:24:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.812 03:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.070 00:21:28.327 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.327 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.328 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.328 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.328 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.328 03:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.328 03:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.585 03:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.585 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.585 { 00:21:28.585 "cntlid": 101, 00:21:28.585 "qid": 0, 00:21:28.585 "state": "enabled", 00:21:28.585 "thread": "nvmf_tgt_poll_group_000", 00:21:28.585 "listen_address": { 00:21:28.585 "trtype": "TCP", 00:21:28.585 "adrfam": "IPv4", 00:21:28.585 "traddr": "10.0.0.2", 00:21:28.585 "trsvcid": "4420" 00:21:28.585 }, 00:21:28.585 "peer_address": { 00:21:28.585 "trtype": "TCP", 00:21:28.585 "adrfam": "IPv4", 00:21:28.585 "traddr": "10.0.0.1", 00:21:28.585 "trsvcid": "41922" 00:21:28.585 }, 00:21:28.585 "auth": 
{ 00:21:28.585 "state": "completed", 00:21:28.585 "digest": "sha512", 00:21:28.585 "dhgroup": "null" 00:21:28.585 } 00:21:28.585 } 00:21:28.585 ]' 00:21:28.585 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.585 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.585 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.585 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:28.585 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.585 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.585 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.585 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.843 03:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:21:29.775 03:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.775 03:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.775 03:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.775 03:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.775 03:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.775 03:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.775 03:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:29.775 03:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.033 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.290 00:21:30.290 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.290 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.290 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.548 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.548 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.548 03:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.548 03:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.548 03:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.548 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.548 { 00:21:30.548 "cntlid": 103, 00:21:30.548 "qid": 0, 00:21:30.548 "state": "enabled", 00:21:30.548 "thread": "nvmf_tgt_poll_group_000", 00:21:30.548 "listen_address": { 00:21:30.548 "trtype": "TCP", 00:21:30.548 "adrfam": "IPv4", 00:21:30.548 "traddr": "10.0.0.2", 00:21:30.548 "trsvcid": "4420" 00:21:30.548 }, 00:21:30.548 "peer_address": { 00:21:30.548 "trtype": "TCP", 00:21:30.548 "adrfam": "IPv4", 00:21:30.548 "traddr": "10.0.0.1", 00:21:30.548 "trsvcid": "41948" 00:21:30.548 }, 00:21:30.548 "auth": { 00:21:30.548 "state": "completed", 00:21:30.548 "digest": "sha512", 00:21:30.548 "dhgroup": "null" 00:21:30.548 } 00:21:30.548 } 00:21:30.548 ]' 00:21:30.548 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.548 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.548 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.806 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:30.806 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.806 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.806 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.806 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.064 03:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:21:31.996 03:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.997 03:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.997 03:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.997 03:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.997 03:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.997 03:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.997 03:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.997 03:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.997 03:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.254 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.511 00:21:32.512 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.512 03:24:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.512 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.770 { 00:21:32.770 "cntlid": 105, 00:21:32.770 "qid": 0, 00:21:32.770 "state": "enabled", 00:21:32.770 "thread": "nvmf_tgt_poll_group_000", 00:21:32.770 "listen_address": { 00:21:32.770 "trtype": "TCP", 00:21:32.770 "adrfam": "IPv4", 00:21:32.770 "traddr": "10.0.0.2", 00:21:32.770 "trsvcid": "4420" 00:21:32.770 }, 00:21:32.770 "peer_address": { 00:21:32.770 "trtype": "TCP", 00:21:32.770 "adrfam": "IPv4", 00:21:32.770 "traddr": "10.0.0.1", 00:21:32.770 "trsvcid": "41978" 00:21:32.770 }, 00:21:32.770 "auth": { 00:21:32.770 "state": "completed", 00:21:32.770 "digest": "sha512", 00:21:32.770 "dhgroup": "ffdhe2048" 00:21:32.770 } 00:21:32.770 } 00:21:32.770 ]' 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.770 03:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.334 03:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:21:34.263 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.263 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.263 03:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.263 03:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
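Each authentication pass in this transcript is verified the same way: the test queries the target for the subsystem's active queue pairs and asserts on the reported auth fields with jq. A minimal sketch of that verification step, assembled only from the rpc.py and jq invocations that appear verbatim in this log — the RPC script path, subsystem NQN, and expected values are the test's own; it assumes a running SPDK target on its default RPC socket (the log's rpc_cmd wrapper, whose expansion is not shown here):

# Target side: list the qpairs of the subsystem under test.
qpairs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# The test asserts on the auth block of the first qpair:
echo "$qpairs" | jq -r '.[0].auth.digest'    # expected: sha512
echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expected: the dhgroup under test, e.g. ffdhe2048
echo "$qpairs" | jq -r '.[0].auth.state'     # expected: completed
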
00:21:34.263 03:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.263 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.263 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.263 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.520 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.829 00:21:34.829 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.829 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.829 03:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.086 { 00:21:35.086 "cntlid": 107, 00:21:35.086 "qid": 0, 00:21:35.086 "state": "enabled", 00:21:35.086 "thread": 
"nvmf_tgt_poll_group_000", 00:21:35.086 "listen_address": { 00:21:35.086 "trtype": "TCP", 00:21:35.086 "adrfam": "IPv4", 00:21:35.086 "traddr": "10.0.0.2", 00:21:35.086 "trsvcid": "4420" 00:21:35.086 }, 00:21:35.086 "peer_address": { 00:21:35.086 "trtype": "TCP", 00:21:35.086 "adrfam": "IPv4", 00:21:35.086 "traddr": "10.0.0.1", 00:21:35.086 "trsvcid": "42014" 00:21:35.086 }, 00:21:35.086 "auth": { 00:21:35.086 "state": "completed", 00:21:35.086 "digest": "sha512", 00:21:35.086 "dhgroup": "ffdhe2048" 00:21:35.086 } 00:21:35.086 } 00:21:35.086 ]' 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.086 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.355 03:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:21:36.296 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.296 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.296 03:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.296 03:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.296 03:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.296 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.296 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.296 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.553 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:36.553 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.553 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.553 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:36.553 03:24:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:36.553 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.553 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.553 03:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.553 03:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.553 03:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.553 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.553 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.126 00:21:37.126 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.126 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.126 03:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.383 { 00:21:37.383 "cntlid": 109, 00:21:37.383 "qid": 0, 00:21:37.383 "state": "enabled", 00:21:37.383 "thread": "nvmf_tgt_poll_group_000", 00:21:37.383 "listen_address": { 00:21:37.383 "trtype": "TCP", 00:21:37.383 "adrfam": "IPv4", 00:21:37.383 "traddr": "10.0.0.2", 00:21:37.383 "trsvcid": "4420" 00:21:37.383 }, 00:21:37.383 "peer_address": { 00:21:37.383 "trtype": "TCP", 00:21:37.383 "adrfam": "IPv4", 00:21:37.383 "traddr": "10.0.0.1", 00:21:37.383 "trsvcid": "55232" 00:21:37.383 }, 00:21:37.383 "auth": { 00:21:37.383 "state": "completed", 00:21:37.383 "digest": "sha512", 00:21:37.383 "dhgroup": "ffdhe2048" 00:21:37.383 } 00:21:37.383 } 00:21:37.383 ]' 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.383 03:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.640 03:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:21:38.572 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.572 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.572 03:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.572 03:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.572 03:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.572 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.572 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:38.572 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.829 03:24:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.394 00:21:39.394 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.394 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.394 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.394 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.394 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.394 03:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.394 03:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.394 03:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.394 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.394 { 00:21:39.394 "cntlid": 111, 00:21:39.394 "qid": 0, 00:21:39.394 "state": "enabled", 00:21:39.394 "thread": "nvmf_tgt_poll_group_000", 00:21:39.394 "listen_address": { 00:21:39.394 "trtype": "TCP", 00:21:39.394 "adrfam": "IPv4", 00:21:39.394 "traddr": "10.0.0.2", 00:21:39.394 "trsvcid": "4420" 00:21:39.394 }, 00:21:39.394 "peer_address": { 00:21:39.394 "trtype": "TCP", 00:21:39.394 "adrfam": "IPv4", 00:21:39.394 "traddr": "10.0.0.1", 00:21:39.394 "trsvcid": "55254" 00:21:39.394 }, 00:21:39.394 "auth": { 00:21:39.394 "state": "completed", 00:21:39.394 "digest": "sha512", 00:21:39.394 "dhgroup": "ffdhe2048" 00:21:39.394 } 00:21:39.394 } 00:21:39.394 ]' 00:21:39.394 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.651 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.651 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.651 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:39.651 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.651 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.651 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.651 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.913 03:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:21:40.845 03:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.845 03:24:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.845 03:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.846 03:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.846 03:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.846 03:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.846 03:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.846 03:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:40.846 03:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.103 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.361 00:21:41.361 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.361 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.361 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.619 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.619 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.619 03:24:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.619 03:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.619 03:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.619 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.619 { 00:21:41.619 "cntlid": 113, 00:21:41.619 "qid": 0, 00:21:41.619 "state": "enabled", 00:21:41.619 "thread": "nvmf_tgt_poll_group_000", 00:21:41.619 "listen_address": { 00:21:41.619 "trtype": "TCP", 00:21:41.619 "adrfam": "IPv4", 00:21:41.619 "traddr": "10.0.0.2", 00:21:41.619 "trsvcid": "4420" 00:21:41.619 }, 00:21:41.619 "peer_address": { 00:21:41.619 "trtype": "TCP", 00:21:41.619 "adrfam": "IPv4", 00:21:41.619 "traddr": "10.0.0.1", 00:21:41.619 "trsvcid": "55284" 00:21:41.619 }, 00:21:41.619 "auth": { 00:21:41.619 "state": "completed", 00:21:41.619 "digest": "sha512", 00:21:41.619 "dhgroup": "ffdhe3072" 00:21:41.619 } 00:21:41.619 } 00:21:41.619 ]' 00:21:41.619 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.878 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.878 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.878 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:41.878 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.878 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.878 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.878 03:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.136 03:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:21:43.071 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.071 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.071 03:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.071 03:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.071 03:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.071 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.071 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:43.071 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.330 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.587 00:21:43.588 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.588 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.588 03:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.153 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.153 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.153 03:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.153 03:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.153 03:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.153 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.153 { 00:21:44.153 "cntlid": 115, 00:21:44.153 "qid": 0, 00:21:44.153 "state": "enabled", 00:21:44.154 "thread": "nvmf_tgt_poll_group_000", 00:21:44.154 "listen_address": { 00:21:44.154 "trtype": "TCP", 00:21:44.154 "adrfam": "IPv4", 00:21:44.154 "traddr": "10.0.0.2", 00:21:44.154 "trsvcid": "4420" 00:21:44.154 }, 00:21:44.154 "peer_address": { 00:21:44.154 "trtype": "TCP", 00:21:44.154 "adrfam": "IPv4", 00:21:44.154 "traddr": "10.0.0.1", 00:21:44.154 "trsvcid": "55296" 00:21:44.154 }, 00:21:44.154 "auth": { 00:21:44.154 "state": "completed", 00:21:44.154 "digest": "sha512", 00:21:44.154 "dhgroup": "ffdhe3072" 00:21:44.154 } 00:21:44.154 } 
00:21:44.154 ]' 00:21:44.154 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.154 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.154 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.154 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:44.154 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.154 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.154 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.154 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.412 03:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:21:45.347 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.347 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.347 03:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.347 03:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.347 03:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.347 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.347 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.347 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.605 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:45.605 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.605 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.605 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:45.605 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:45.605 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.605 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.605 03:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.605 03:24:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.605 03:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.605 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.605 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.863 00:21:45.863 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.863 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.863 03:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.121 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.121 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.121 03:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.121 03:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.121 03:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.121 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.121 { 00:21:46.121 "cntlid": 117, 00:21:46.121 "qid": 0, 00:21:46.121 "state": "enabled", 00:21:46.121 "thread": "nvmf_tgt_poll_group_000", 00:21:46.121 "listen_address": { 00:21:46.121 "trtype": "TCP", 00:21:46.121 "adrfam": "IPv4", 00:21:46.121 "traddr": "10.0.0.2", 00:21:46.121 "trsvcid": "4420" 00:21:46.121 }, 00:21:46.121 "peer_address": { 00:21:46.121 "trtype": "TCP", 00:21:46.121 "adrfam": "IPv4", 00:21:46.121 "traddr": "10.0.0.1", 00:21:46.121 "trsvcid": "52190" 00:21:46.121 }, 00:21:46.121 "auth": { 00:21:46.121 "state": "completed", 00:21:46.121 "digest": "sha512", 00:21:46.121 "dhgroup": "ffdhe3072" 00:21:46.121 } 00:21:46.121 } 00:21:46.121 ]' 00:21:46.121 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.379 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.379 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.379 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:46.379 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.379 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.379 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.379 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.637 03:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:21:47.570 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.570 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.570 03:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.570 03:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.570 03:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.570 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.570 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:47.570 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.827 03:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.084 00:21:48.342 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.342 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.342 03:24:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.600 { 00:21:48.600 "cntlid": 119, 00:21:48.600 "qid": 0, 00:21:48.600 "state": "enabled", 00:21:48.600 "thread": "nvmf_tgt_poll_group_000", 00:21:48.600 "listen_address": { 00:21:48.600 "trtype": "TCP", 00:21:48.600 "adrfam": "IPv4", 00:21:48.600 "traddr": "10.0.0.2", 00:21:48.600 "trsvcid": "4420" 00:21:48.600 }, 00:21:48.600 "peer_address": { 00:21:48.600 "trtype": "TCP", 00:21:48.600 "adrfam": "IPv4", 00:21:48.600 "traddr": "10.0.0.1", 00:21:48.600 "trsvcid": "52226" 00:21:48.600 }, 00:21:48.600 "auth": { 00:21:48.600 "state": "completed", 00:21:48.600 "digest": "sha512", 00:21:48.600 "dhgroup": "ffdhe3072" 00:21:48.600 } 00:21:48.600 } 00:21:48.600 ]' 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.600 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.858 03:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:21:49.792 03:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.792 03:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.792 03:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.792 03:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.792 03:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.792 03:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.792 03:24:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.792 03:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:49.792 03:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.050 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.615 00:21:50.615 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.615 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.615 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.615 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.615 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.615 03:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.615 03:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.615 03:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.615 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.615 { 00:21:50.615 "cntlid": 121, 00:21:50.615 "qid": 0, 00:21:50.615 "state": "enabled", 00:21:50.615 "thread": "nvmf_tgt_poll_group_000", 00:21:50.615 "listen_address": { 00:21:50.615 "trtype": "TCP", 00:21:50.615 "adrfam": "IPv4", 
00:21:50.615 "traddr": "10.0.0.2", 00:21:50.615 "trsvcid": "4420" 00:21:50.615 }, 00:21:50.615 "peer_address": { 00:21:50.615 "trtype": "TCP", 00:21:50.615 "adrfam": "IPv4", 00:21:50.615 "traddr": "10.0.0.1", 00:21:50.615 "trsvcid": "52248" 00:21:50.615 }, 00:21:50.615 "auth": { 00:21:50.615 "state": "completed", 00:21:50.615 "digest": "sha512", 00:21:50.615 "dhgroup": "ffdhe4096" 00:21:50.615 } 00:21:50.615 } 00:21:50.615 ]' 00:21:50.615 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.873 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.873 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.873 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:50.873 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.873 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.873 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.873 03:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.130 03:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:21:52.063 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.063 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.063 03:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.063 03:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.063 03:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.063 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.063 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:52.063 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:52.320 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:52.320 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.320 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.320 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:52.320 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:52.321 03:24:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.321 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.321 03:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.321 03:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.321 03:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.321 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.321 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.577 00:21:52.838 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.838 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.838 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.133 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.133 03:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.133 03:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.133 03:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.133 03:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.133 03:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.133 { 00:21:53.133 "cntlid": 123, 00:21:53.133 "qid": 0, 00:21:53.133 "state": "enabled", 00:21:53.133 "thread": "nvmf_tgt_poll_group_000", 00:21:53.133 "listen_address": { 00:21:53.133 "trtype": "TCP", 00:21:53.134 "adrfam": "IPv4", 00:21:53.134 "traddr": "10.0.0.2", 00:21:53.134 "trsvcid": "4420" 00:21:53.134 }, 00:21:53.134 "peer_address": { 00:21:53.134 "trtype": "TCP", 00:21:53.134 "adrfam": "IPv4", 00:21:53.134 "traddr": "10.0.0.1", 00:21:53.134 "trsvcid": "52276" 00:21:53.134 }, 00:21:53.134 "auth": { 00:21:53.134 "state": "completed", 00:21:53.134 "digest": "sha512", 00:21:53.134 "dhgroup": "ffdhe4096" 00:21:53.134 } 00:21:53.134 } 00:21:53.134 ]' 00:21:53.134 03:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.134 03:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.134 03:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.134 03:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:53.134 03:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.134 03:24:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.134 03:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.134 03:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.391 03:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:21:54.323 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.323 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.323 03:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.323 03:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.323 03:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.323 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.323 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:54.323 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.581 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.839 00:21:54.839 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.839 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.839 03:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.096 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.096 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.096 03:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.096 03:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.096 03:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.096 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.096 { 00:21:55.096 "cntlid": 125, 00:21:55.096 "qid": 0, 00:21:55.096 "state": "enabled", 00:21:55.096 "thread": "nvmf_tgt_poll_group_000", 00:21:55.096 "listen_address": { 00:21:55.096 "trtype": "TCP", 00:21:55.096 "adrfam": "IPv4", 00:21:55.096 "traddr": "10.0.0.2", 00:21:55.096 "trsvcid": "4420" 00:21:55.096 }, 00:21:55.096 "peer_address": { 00:21:55.096 "trtype": "TCP", 00:21:55.096 "adrfam": "IPv4", 00:21:55.096 "traddr": "10.0.0.1", 00:21:55.096 "trsvcid": "52306" 00:21:55.096 }, 00:21:55.096 "auth": { 00:21:55.096 "state": "completed", 00:21:55.096 "digest": "sha512", 00:21:55.096 "dhgroup": "ffdhe4096" 00:21:55.096 } 00:21:55.096 } 00:21:55.096 ]' 00:21:55.096 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.354 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.354 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.354 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:55.354 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.354 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.354 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.354 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.612 03:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:21:56.545 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
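That was one complete bidirectional round: provision the key pair on the target, attach from the SPDK host, repeat the login through the kernel initiator, tear down. Condensed below, with the RPC names and flags verbatim from the trace; the DHHC-1 secrets are deliberately elided here, they are the serialized forms of the named keys:

    # One bidirectional DH-HMAC-CHAP round, as exercised above for key2.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Target side: host may log in with key2; ckey2 lets the host verify the target back.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # SPDK host side (its own RPC socket), then the kernel initiator with raw secrets.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn##*:}" \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n "$subnqn"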
00:21:56.545 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.545 03:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.545 03:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.545 03:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.545 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.545 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.545 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.802 03:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.060 00:21:57.060 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.060 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.060 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.315 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.315 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.315 03:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.315 03:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:21:57.572 03:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.572 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.572 { 00:21:57.572 "cntlid": 127, 00:21:57.572 "qid": 0, 00:21:57.572 "state": "enabled", 00:21:57.572 "thread": "nvmf_tgt_poll_group_000", 00:21:57.572 "listen_address": { 00:21:57.572 "trtype": "TCP", 00:21:57.572 "adrfam": "IPv4", 00:21:57.572 "traddr": "10.0.0.2", 00:21:57.572 "trsvcid": "4420" 00:21:57.572 }, 00:21:57.572 "peer_address": { 00:21:57.572 "trtype": "TCP", 00:21:57.572 "adrfam": "IPv4", 00:21:57.572 "traddr": "10.0.0.1", 00:21:57.572 "trsvcid": "55138" 00:21:57.572 }, 00:21:57.572 "auth": { 00:21:57.572 "state": "completed", 00:21:57.572 "digest": "sha512", 00:21:57.572 "dhgroup": "ffdhe4096" 00:21:57.572 } 00:21:57.572 } 00:21:57.572 ]' 00:21:57.572 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.572 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.572 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.572 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:57.572 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.572 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.572 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.572 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.829 03:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:21:58.762 03:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.762 03:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.762 03:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.762 03:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.762 03:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.762 03:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.762 03:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.762 03:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.762 03:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.019 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:21:59.019 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.020 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.020 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:59.020 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:59.020 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.020 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.020 03:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.020 03:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.020 03:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.020 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.020 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.585 00:21:59.585 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.585 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.585 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.843 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.843 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.843 03:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.843 03:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.843 03:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.843 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.843 { 00:21:59.843 "cntlid": 129, 00:21:59.843 "qid": 0, 00:21:59.843 "state": "enabled", 00:21:59.843 "thread": "nvmf_tgt_poll_group_000", 00:21:59.843 "listen_address": { 00:21:59.843 "trtype": "TCP", 00:21:59.843 "adrfam": "IPv4", 00:21:59.843 "traddr": "10.0.0.2", 00:21:59.843 "trsvcid": "4420" 00:21:59.843 }, 00:21:59.843 "peer_address": { 00:21:59.843 "trtype": "TCP", 00:21:59.843 "adrfam": "IPv4", 00:21:59.843 "traddr": "10.0.0.1", 00:21:59.843 "trsvcid": "55170" 00:21:59.843 }, 00:21:59.843 "auth": { 00:21:59.843 "state": "completed", 00:21:59.843 "digest": "sha512", 00:21:59.843 "dhgroup": "ffdhe6144" 00:21:59.843 } 00:21:59.843 } 00:21:59.843 ]' 00:21:59.843 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.843 03:25:05 
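The qpair dump each round prints carries everything the assertions need: listen_address is the target side (10.0.0.2:4420), peer_address is the initiator's ephemeral port, and the auth object echoes the negotiated parameters; cntlid grows by 2 with each fresh login in this trace. A compact way to read the same dump, sketched with rpc and subnqn as in the blocks above (the @46 digest comparison this jq echo feeds resumes right after this note):

    # Human-readable one-liner over the same nvmf_subsystem_get_qpairs output.
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[] |
        "cntlid \(.cntlid): \(.auth.digest)/\(.auth.dhgroup) \(.auth.state)" +
        " peer \(.peer_address.traddr):\(.peer_address.trsvcid)"'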
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.843 03:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.100 03:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:00.100 03:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.100 03:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.101 03:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.101 03:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.358 03:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:22:01.292 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.292 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.292 03:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.292 03:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.292 03:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.292 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.292 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.292 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.550 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:01.550 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.550 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.550 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:01.550 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:01.550 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.550 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.550 03:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.550 03:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.550 03:25:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.550 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.550 03:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.115 00:22:02.115 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.115 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.115 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.373 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.373 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.373 03:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.373 03:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.373 03:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.373 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.373 { 00:22:02.373 "cntlid": 131, 00:22:02.374 "qid": 0, 00:22:02.374 "state": "enabled", 00:22:02.374 "thread": "nvmf_tgt_poll_group_000", 00:22:02.374 "listen_address": { 00:22:02.374 "trtype": "TCP", 00:22:02.374 "adrfam": "IPv4", 00:22:02.374 "traddr": "10.0.0.2", 00:22:02.374 "trsvcid": "4420" 00:22:02.374 }, 00:22:02.374 "peer_address": { 00:22:02.374 "trtype": "TCP", 00:22:02.374 "adrfam": "IPv4", 00:22:02.374 "traddr": "10.0.0.1", 00:22:02.374 "trsvcid": "55192" 00:22:02.374 }, 00:22:02.374 "auth": { 00:22:02.374 "state": "completed", 00:22:02.374 "digest": "sha512", 00:22:02.374 "dhgroup": "ffdhe6144" 00:22:02.374 } 00:22:02.374 } 00:22:02.374 ]' 00:22:02.374 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.374 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.374 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.632 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.632 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.632 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.632 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.632 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.891 03:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:22:03.825 03:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.825 03:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.825 03:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.825 03:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.825 03:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.825 03:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.825 03:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.825 03:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.083 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.650 00:22:04.650 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.650 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.650 03:25:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.908 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.908 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.908 03:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.908 03:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.908 03:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.908 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.908 { 00:22:04.908 "cntlid": 133, 00:22:04.908 "qid": 0, 00:22:04.908 "state": "enabled", 00:22:04.908 "thread": "nvmf_tgt_poll_group_000", 00:22:04.908 "listen_address": { 00:22:04.908 "trtype": "TCP", 00:22:04.908 "adrfam": "IPv4", 00:22:04.908 "traddr": "10.0.0.2", 00:22:04.908 "trsvcid": "4420" 00:22:04.908 }, 00:22:04.908 "peer_address": { 00:22:04.908 "trtype": "TCP", 00:22:04.908 "adrfam": "IPv4", 00:22:04.908 "traddr": "10.0.0.1", 00:22:04.908 "trsvcid": "55224" 00:22:04.908 }, 00:22:04.908 "auth": { 00:22:04.908 "state": "completed", 00:22:04.908 "digest": "sha512", 00:22:04.908 "dhgroup": "ffdhe6144" 00:22:04.908 } 00:22:04.908 } 00:22:04.908 ]' 00:22:04.908 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.908 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.908 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.908 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.908 03:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.908 03:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.908 03:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.908 03:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.166 03:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.538 03:25:12 
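The @92/@93 markers that keep reappearing are the two loops driving this sweep, and the @37 expansion decides per key whether a round is bidirectional. Reconstructed from the trace (the digest is pinned to sha512 in this stretch), the machinery is roughly:

    # Outer loop over DH groups, inner loop over key indices (auth.sh@92/@93).
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe4096, ffdhe6144, ffdhe8192 here
        for keyid in "${!keys[@]}"; do         # 0 1 2 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done

    # Inside connect_authenticate, auth.sh@37 builds the ctrlr-key flags with a
    # conditional expansion; ckeys[3] is empty in this run, so the key3 rounds
    # (the one that follows included) pass no --dhchap-ctrlr-key and stay
    # unidirectional: the target authenticates the host, not the reverse.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # empty ckeys[3] -> empty array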
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.538 03:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.103 00:22:07.103 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.103 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.103 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.360 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.360 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.360 03:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.360 03:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.360 03:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.360 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.360 { 00:22:07.361 "cntlid": 135, 00:22:07.361 "qid": 0, 00:22:07.361 "state": "enabled", 00:22:07.361 "thread": "nvmf_tgt_poll_group_000", 00:22:07.361 "listen_address": { 00:22:07.361 "trtype": "TCP", 00:22:07.361 "adrfam": "IPv4", 00:22:07.361 "traddr": "10.0.0.2", 00:22:07.361 "trsvcid": "4420" 00:22:07.361 }, 
00:22:07.361 "peer_address": { 00:22:07.361 "trtype": "TCP", 00:22:07.361 "adrfam": "IPv4", 00:22:07.361 "traddr": "10.0.0.1", 00:22:07.361 "trsvcid": "50318" 00:22:07.361 }, 00:22:07.361 "auth": { 00:22:07.361 "state": "completed", 00:22:07.361 "digest": "sha512", 00:22:07.361 "dhgroup": "ffdhe6144" 00:22:07.361 } 00:22:07.361 } 00:22:07.361 ]' 00:22:07.361 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.361 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.361 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.361 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.361 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.617 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.617 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.617 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.617 03:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:22:08.555 03:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.832 03:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.832 03:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.832 03:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.832 03:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.832 03:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.832 03:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.832 03:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.832 03:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.105 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.038 00:22:10.038 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.038 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.038 03:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.295 { 00:22:10.295 "cntlid": 137, 00:22:10.295 "qid": 0, 00:22:10.295 "state": "enabled", 00:22:10.295 "thread": "nvmf_tgt_poll_group_000", 00:22:10.295 "listen_address": { 00:22:10.295 "trtype": "TCP", 00:22:10.295 "adrfam": "IPv4", 00:22:10.295 "traddr": "10.0.0.2", 00:22:10.295 "trsvcid": "4420" 00:22:10.295 }, 00:22:10.295 "peer_address": { 00:22:10.295 "trtype": "TCP", 00:22:10.295 "adrfam": "IPv4", 00:22:10.295 "traddr": "10.0.0.1", 00:22:10.295 "trsvcid": "50348" 00:22:10.295 }, 00:22:10.295 "auth": { 00:22:10.295 "state": "completed", 00:22:10.295 "digest": "sha512", 00:22:10.295 "dhgroup": "ffdhe8192" 00:22:10.295 } 00:22:10.295 } 00:22:10.295 ]' 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.295 03:25:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.295 03:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.552 03:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:22:11.483 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.483 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.483 03:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.483 03:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.483 03:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.483 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.483 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.483 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.740 03:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.672 00:22:12.672 03:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.672 03:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.672 03:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.929 03:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.929 03:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.929 03:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.929 03:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.929 03:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.929 03:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.929 { 00:22:12.929 "cntlid": 139, 00:22:12.929 "qid": 0, 00:22:12.929 "state": "enabled", 00:22:12.929 "thread": "nvmf_tgt_poll_group_000", 00:22:12.929 "listen_address": { 00:22:12.930 "trtype": "TCP", 00:22:12.930 "adrfam": "IPv4", 00:22:12.930 "traddr": "10.0.0.2", 00:22:12.930 "trsvcid": "4420" 00:22:12.930 }, 00:22:12.930 "peer_address": { 00:22:12.930 "trtype": "TCP", 00:22:12.930 "adrfam": "IPv4", 00:22:12.930 "traddr": "10.0.0.1", 00:22:12.930 "trsvcid": "50362" 00:22:12.930 }, 00:22:12.930 "auth": { 00:22:12.930 "state": "completed", 00:22:12.930 "digest": "sha512", 00:22:12.930 "dhgroup": "ffdhe8192" 00:22:12.930 } 00:22:12.930 } 00:22:12.930 ]' 00:22:12.930 03:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.930 03:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.930 03:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.930 03:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.930 03:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.930 03:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.930 03:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.930 03:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.187 03:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MDdiMGJlMzVmMmE3NjEzYWY2NjdkMDhlNzE2YzQ5Nzb2gSDU: --dhchap-ctrl-secret DHHC-1:02:M2ExMjQzNmQzZjM4Zjc4NDc4ZGY3Yjg0OWM5MmYxMGY5ODFjYjdhYTRlMDYxZWY1yP+/9A==: 00:22:14.118 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.118 03:25:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.118 03:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.118 03:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.118 03:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.118 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.118 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.118 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.376 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:14.376 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.376 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.376 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:14.376 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:14.376 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.376 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.376 03:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.376 03:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.632 03:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.632 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.632 03:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.197 00:22:15.455 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.455 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.455 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.712 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.713 { 00:22:15.713 "cntlid": 141, 00:22:15.713 "qid": 0, 00:22:15.713 "state": "enabled", 00:22:15.713 "thread": "nvmf_tgt_poll_group_000", 00:22:15.713 "listen_address": { 00:22:15.713 "trtype": "TCP", 00:22:15.713 "adrfam": "IPv4", 00:22:15.713 "traddr": "10.0.0.2", 00:22:15.713 "trsvcid": "4420" 00:22:15.713 }, 00:22:15.713 "peer_address": { 00:22:15.713 "trtype": "TCP", 00:22:15.713 "adrfam": "IPv4", 00:22:15.713 "traddr": "10.0.0.1", 00:22:15.713 "trsvcid": "50370" 00:22:15.713 }, 00:22:15.713 "auth": { 00:22:15.713 "state": "completed", 00:22:15.713 "digest": "sha512", 00:22:15.713 "dhgroup": "ffdhe8192" 00:22:15.713 } 00:22:15.713 } 00:22:15.713 ]' 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.713 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.970 03:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZWJiNDUwNzBkYjY0ZGYwZjhhNTQ5NzBhMTA1NWJkN2YxZmIxMzM4YThmOTg1NTExOimYFA==: --dhchap-ctrl-secret DHHC-1:01:ZDc1Y2VlMGM4MTU3YjIwODVkMjdkYWFmZDZiOTJhMDJJwEtf: 00:22:16.902 03:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.902 03:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.902 03:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.902 03:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.902 03:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.902 03:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.902 03:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.902 03:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.162 03:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.093 00:22:18.093 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.093 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.093 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.350 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.350 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.350 03:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.350 03:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.350 03:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.350 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.350 { 00:22:18.350 "cntlid": 143, 00:22:18.350 "qid": 0, 00:22:18.350 "state": "enabled", 00:22:18.350 "thread": "nvmf_tgt_poll_group_000", 00:22:18.350 "listen_address": { 00:22:18.350 "trtype": "TCP", 00:22:18.350 "adrfam": "IPv4", 00:22:18.350 "traddr": "10.0.0.2", 00:22:18.350 "trsvcid": "4420" 00:22:18.350 }, 00:22:18.350 "peer_address": { 00:22:18.350 "trtype": "TCP", 00:22:18.350 "adrfam": "IPv4", 00:22:18.350 "traddr": "10.0.0.1", 00:22:18.350 "trsvcid": "39840" 00:22:18.350 }, 00:22:18.350 "auth": { 00:22:18.350 "state": "completed", 00:22:18.350 "digest": "sha512", 00:22:18.350 "dhgroup": "ffdhe8192" 00:22:18.350 } 00:22:18.350 } 00:22:18.350 ]' 00:22:18.350 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.606 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.606 
03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.606 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.606 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.606 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.606 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.606 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.862 03:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:22:19.795 03:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.795 03:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.795 03:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.795 03:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.795 03:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.795 03:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:19.795 03:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:19.795 03:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:19.795 03:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.795 03:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.795 03:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.054 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.987 00:22:20.987 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.987 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.987 03:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.244 { 00:22:21.244 "cntlid": 145, 00:22:21.244 "qid": 0, 00:22:21.244 "state": "enabled", 00:22:21.244 "thread": "nvmf_tgt_poll_group_000", 00:22:21.244 "listen_address": { 00:22:21.244 "trtype": "TCP", 00:22:21.244 "adrfam": "IPv4", 00:22:21.244 "traddr": "10.0.0.2", 00:22:21.244 "trsvcid": "4420" 00:22:21.244 }, 00:22:21.244 "peer_address": { 00:22:21.244 "trtype": "TCP", 00:22:21.244 "adrfam": "IPv4", 00:22:21.244 "traddr": "10.0.0.1", 00:22:21.244 "trsvcid": "39868" 00:22:21.244 }, 00:22:21.244 "auth": { 00:22:21.244 "state": "completed", 00:22:21.244 "digest": "sha512", 00:22:21.244 "dhgroup": "ffdhe8192" 00:22:21.244 } 00:22:21.244 } 00:22:21.244 ]' 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.244 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.502 03:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MjI0NDUyYmZkMjNiOTYzNGEzZjAyYmU3NTVjZGE2OGNhY2E3OTk0YTBkOWE2MTlid1ecCw==: --dhchap-ctrl-secret DHHC-1:03:ZDU2MGU0ZTI4N2VhMGFlNjE5YTNlNWIyNzNmMjFiNTIxNjY2OTY0ZDY0ZTc4ZDM5MmZhMDc0NzUwM2FjOGM2NrFKL3Y=: 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:22.437 03:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:22:23.371 request: 00:22:23.371 { 00:22:23.371 "name": "nvme0", 00:22:23.371 "trtype": "tcp", 00:22:23.371 "traddr": "10.0.0.2", 00:22:23.371 "adrfam": "ipv4", 00:22:23.371 "trsvcid": "4420", 00:22:23.371 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:23.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:23.371 "prchk_reftag": false, 00:22:23.371 "prchk_guard": false, 00:22:23.371 "hdgst": false, 00:22:23.371 "ddgst": false, 00:22:23.371 "dhchap_key": "key2", 00:22:23.371 "method": "bdev_nvme_attach_controller", 00:22:23.371 "req_id": 1 00:22:23.371 } 00:22:23.371 Got JSON-RPC error response 00:22:23.371 response: 00:22:23.371 { 00:22:23.371 "code": -5, 00:22:23.371 "message": "Input/output error" 00:22:23.371 } 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.371 03:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:24.303 request: 00:22:24.303 { 00:22:24.303 "name": "nvme0", 00:22:24.303 "trtype": "tcp", 00:22:24.303 "traddr": "10.0.0.2", 00:22:24.303 "adrfam": "ipv4", 00:22:24.303 "trsvcid": "4420", 00:22:24.303 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:24.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:24.303 "prchk_reftag": false, 00:22:24.303 "prchk_guard": false, 00:22:24.303 "hdgst": false, 00:22:24.303 "ddgst": false, 00:22:24.303 "dhchap_key": "key1", 00:22:24.303 "dhchap_ctrlr_key": "ckey2", 00:22:24.303 "method": "bdev_nvme_attach_controller", 00:22:24.303 "req_id": 1 00:22:24.303 } 00:22:24.303 Got JSON-RPC error response 00:22:24.303 response: 00:22:24.303 { 00:22:24.303 "code": -5, 00:22:24.303 "message": "Input/output error" 00:22:24.303 } 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.303 03:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.235 request: 00:22:25.235 { 00:22:25.235 "name": "nvme0", 00:22:25.235 "trtype": "tcp", 00:22:25.235 "traddr": "10.0.0.2", 00:22:25.235 "adrfam": "ipv4", 00:22:25.235 "trsvcid": "4420", 00:22:25.235 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:25.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:25.235 "prchk_reftag": false, 00:22:25.235 "prchk_guard": false, 00:22:25.235 "hdgst": false, 00:22:25.235 "ddgst": false, 00:22:25.235 "dhchap_key": "key1", 00:22:25.235 "dhchap_ctrlr_key": "ckey1", 00:22:25.235 "method": "bdev_nvme_attach_controller", 00:22:25.235 "req_id": 1 00:22:25.235 } 00:22:25.235 Got JSON-RPC error response 00:22:25.235 response: 00:22:25.235 { 00:22:25.235 "code": -5, 00:22:25.235 "message": "Input/output error" 00:22:25.235 } 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3201073 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3201073 ']' 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3201073 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:25.235 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3201073 00:22:25.236 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:25.236 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:22:25.236 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3201073' 00:22:25.236 killing process with pid 3201073 00:22:25.236 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3201073 00:22:25.236 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3201073 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3223588 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3223588 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3223588 ']' 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.494 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3223588 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3223588 ']' 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
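
Annotation: the `waitforlisten 3223588` call traced above polls the freshly restarted nvmf_tgt until it answers on its UNIX-domain RPC socket. A minimal sketch of that pattern follows; the helper body is an assumption (the real one in autotest_common.sh carries more checks), but `rpc_get_methods` is a standard SPDK RPC:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1        # app exited before listening
            # rpc_get_methods succeeds once the RPC server is accepting connections
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1                                           # timed out
    }
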
00:22:25.753 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.754 03:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.022 03:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.308 03:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.309 03:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.309 03:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.252 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.252 { 00:22:27.252 
"cntlid": 1, 00:22:27.252 "qid": 0, 00:22:27.252 "state": "enabled", 00:22:27.252 "thread": "nvmf_tgt_poll_group_000", 00:22:27.252 "listen_address": { 00:22:27.252 "trtype": "TCP", 00:22:27.252 "adrfam": "IPv4", 00:22:27.252 "traddr": "10.0.0.2", 00:22:27.252 "trsvcid": "4420" 00:22:27.252 }, 00:22:27.252 "peer_address": { 00:22:27.252 "trtype": "TCP", 00:22:27.252 "adrfam": "IPv4", 00:22:27.252 "traddr": "10.0.0.1", 00:22:27.252 "trsvcid": "42228" 00:22:27.252 }, 00:22:27.252 "auth": { 00:22:27.252 "state": "completed", 00:22:27.252 "digest": "sha512", 00:22:27.252 "dhgroup": "ffdhe8192" 00:22:27.252 } 00:22:27.252 } 00:22:27.252 ]' 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:27.252 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.510 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.510 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.510 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.769 03:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ODk3YTMyNWE1NDVmZTEzMzQ4ZDgwNDBhYTRiZDFmZmMyMjUzMWQ1NmJiM2U5MzVlYmM4ODI0ZmViMjEyZTZmZiS5hUA=: 00:22:28.708 03:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.708 03:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.708 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.708 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.708 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.708 03:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:28.708 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.708 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.708 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.708 03:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:28.708 03:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:28.967 03:25:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.967 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:28.967 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.967 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:28.967 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.967 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:28.967 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.967 03:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.967 03:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.226 request: 00:22:29.226 { 00:22:29.226 "name": "nvme0", 00:22:29.226 "trtype": "tcp", 00:22:29.226 "traddr": "10.0.0.2", 00:22:29.226 "adrfam": "ipv4", 00:22:29.226 "trsvcid": "4420", 00:22:29.226 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:29.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.226 "prchk_reftag": false, 00:22:29.226 "prchk_guard": false, 00:22:29.226 "hdgst": false, 00:22:29.226 "ddgst": false, 00:22:29.226 "dhchap_key": "key3", 00:22:29.226 "method": "bdev_nvme_attach_controller", 00:22:29.226 "req_id": 1 00:22:29.226 } 00:22:29.226 Got JSON-RPC error response 00:22:29.226 response: 00:22:29.226 { 00:22:29.226 "code": -5, 00:22:29.226 "message": "Input/output error" 00:22:29.226 } 00:22:29.226 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:29.226 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.226 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.226 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.226 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:29.226 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:29.226 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:29.226 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:29.486 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.486 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:29.486 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.486 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:29.486 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.486 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:29.486 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.486 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.486 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.486 request: 00:22:29.486 { 00:22:29.486 "name": "nvme0", 00:22:29.486 "trtype": "tcp", 00:22:29.486 "traddr": "10.0.0.2", 00:22:29.486 "adrfam": "ipv4", 00:22:29.486 "trsvcid": "4420", 00:22:29.486 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:29.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.486 "prchk_reftag": false, 00:22:29.486 "prchk_guard": false, 00:22:29.486 "hdgst": false, 00:22:29.486 "ddgst": false, 00:22:29.486 "dhchap_key": "key3", 00:22:29.486 "method": "bdev_nvme_attach_controller", 00:22:29.486 "req_id": 1 00:22:29.486 } 00:22:29.486 Got JSON-RPC error response 00:22:29.486 response: 00:22:29.486 { 00:22:29.486 "code": -5, 00:22:29.486 "message": "Input/output error" 00:22:29.486 } 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.747 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.008 03:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.008 request: 00:22:30.008 { 00:22:30.008 "name": "nvme0", 00:22:30.008 "trtype": "tcp", 00:22:30.008 "traddr": "10.0.0.2", 00:22:30.008 "adrfam": "ipv4", 00:22:30.008 "trsvcid": "4420", 00:22:30.009 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:30.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:30.009 "prchk_reftag": false, 00:22:30.009 "prchk_guard": false, 00:22:30.009 "hdgst": false, 00:22:30.009 "ddgst": false, 00:22:30.009 
"dhchap_key": "key0", 00:22:30.009 "dhchap_ctrlr_key": "key1", 00:22:30.009 "method": "bdev_nvme_attach_controller", 00:22:30.009 "req_id": 1 00:22:30.009 } 00:22:30.009 Got JSON-RPC error response 00:22:30.009 response: 00:22:30.009 { 00:22:30.009 "code": -5, 00:22:30.009 "message": "Input/output error" 00:22:30.009 } 00:22:30.268 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:30.268 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:30.268 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:30.268 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:30.268 03:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:30.268 03:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:30.526 00:22:30.526 03:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:30.526 03:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.526 03:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:30.785 03:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.785 03:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.785 03:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3201212 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3201212 ']' 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3201212 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3201212 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3201212' 00:22:31.045 killing process with pid 3201212 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3201212 00:22:31.045 03:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3201212 
00:22:31.304 03:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:31.304 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:31.304 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:31.304 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.304 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:31.304 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.304 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.304 rmmod nvme_tcp 00:22:31.304 rmmod nvme_fabrics 00:22:31.304 rmmod nvme_keyring 00:22:31.563 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:31.563 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:31.563 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:31.563 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3223588 ']' 00:22:31.563 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3223588 00:22:31.563 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3223588 ']' 00:22:31.563 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3223588 00:22:31.564 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:31.564 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:31.564 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3223588 00:22:31.564 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:31.564 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:31.564 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3223588' 00:22:31.564 killing process with pid 3223588 00:22:31.564 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3223588 00:22:31.564 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3223588 00:22:31.824 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:31.824 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:31.824 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:31.824 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.824 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.824 03:25:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.824 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.824 03:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.730 03:25:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:33.730 03:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.9v8 /tmp/spdk.key-sha256.uHP /tmp/spdk.key-sha384.Rra /tmp/spdk.key-sha512.b8P /tmp/spdk.key-sha512.D22 /tmp/spdk.key-sha384.RGK /tmp/spdk.key-sha256.8Ax '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:33.730 00:22:33.730 real 3m8.322s 00:22:33.730 user 7m18.414s 00:22:33.730 sys 0m24.765s 00:22:33.730 03:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:33.730 03:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.730 ************************************ 00:22:33.730 END TEST nvmf_auth_target 00:22:33.730 ************************************ 00:22:33.730 03:25:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:33.730 03:25:39 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:33.730 03:25:39 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:33.730 03:25:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:33.730 03:25:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.730 03:25:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.730 ************************************ 00:22:33.730 START TEST nvmf_bdevio_no_huge 00:22:33.730 ************************************ 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:33.730 * Looking for test storage... 00:22:33.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
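
Annotation: nvmf/common.sh, sourced above, derives the host identity from `nvme gen-hostnqn`; the UUID suffix doubles as the host ID used throughout these tests. A sketch of that derivation (variable names as in the trace; the parameter expansion is an assumption about how the uuid is split out):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # -> nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # trailing uuid doubles as the host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
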
00:22:33.730 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.990 03:25:39 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:33.990 03:25:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.894 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:35.895 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:35.895 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:35.895 
03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:35.895 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:35.895 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.895 03:25:41 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:35.895 03:25:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.895 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:36.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:22:36.154 00:22:36.154 --- 10.0.0.2 ping statistics --- 00:22:36.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.154 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:36.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:22:36.154 00:22:36.154 --- 10.0.0.1 ping statistics --- 00:22:36.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.154 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.154 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3226344 00:22:36.155 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:36.155 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3226344 00:22:36.155 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3226344 ']' 00:22:36.155 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.155 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.155 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.155 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.155 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.155 [2024-07-15 03:25:42.125590] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:36.155 [2024-07-15 03:25:42.125683] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:36.155 [2024-07-15 03:25:42.197603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.155 [2024-07-15 03:25:42.276546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.155 [2024-07-15 03:25:42.276605] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.155 [2024-07-15 03:25:42.276629] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.155 [2024-07-15 03:25:42.276640] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.155 [2024-07-15 03:25:42.276650] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
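The nvmf_tcp_init sequence traced above is worth pulling out of the noise. Reduced to a standalone sketch, this is the two-port loopback topology the test rig builds: the initiator port (cvl_0_1) stays in the root namespace while the target port (cvl_0_0) is moved into its own namespace, so NVMe/TCP traffic genuinely crosses the link between the two ports. Interface names, addresses, and flags below are copied from this run and will differ on other machines.

    # Flush old addressing, then split the two ports across namespaces.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP on the initiator side, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # The target itself then runs inside the namespace (as nvmf/common.sh@480 does):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

With the target confined to cvl_0_0_ns_spdk, the initiator-side tools in the root namespace can only reach 10.0.0.2 over the wire, which is exactly what a phy (NET_TYPE=phy) test run is meant to exercise.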
00:22:36.155 [2024-07-15 03:25:42.276816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:36.155 [2024-07-15 03:25:42.276911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:36.155 [2024-07-15 03:25:42.276990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:36.155 [2024-07-15 03:25:42.277312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 [2024-07-15 03:25:42.386895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 Malloc0 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.413 [2024-07-15 03:25:42.424694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:36.413 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:36.413 { 00:22:36.413 "params": { 00:22:36.413 "name": "Nvme$subsystem", 00:22:36.413 "trtype": "$TEST_TRANSPORT", 00:22:36.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.413 "adrfam": "ipv4", 00:22:36.413 "trsvcid": "$NVMF_PORT", 00:22:36.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.413 "hdgst": ${hdgst:-false}, 00:22:36.413 "ddgst": ${ddgst:-false} 00:22:36.414 }, 00:22:36.414 "method": "bdev_nvme_attach_controller" 00:22:36.414 } 00:22:36.414 EOF 00:22:36.414 )") 00:22:36.414 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:36.414 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:36.414 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:36.414 03:25:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:36.414 "params": { 00:22:36.414 "name": "Nvme1", 00:22:36.414 "trtype": "tcp", 00:22:36.414 "traddr": "10.0.0.2", 00:22:36.414 "adrfam": "ipv4", 00:22:36.414 "trsvcid": "4420", 00:22:36.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.414 "hdgst": false, 00:22:36.414 "ddgst": false 00:22:36.414 }, 00:22:36.414 "method": "bdev_nvme_attach_controller" 00:22:36.414 }' 00:22:36.414 [2024-07-15 03:25:42.468454] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
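A small shell idiom in the bdevio launch above deserves a callout: the JSON rendered by gen_nvmf_target_json never touches disk. It reaches the binary through process substitution, which is why the command line carries a /dev/fd/62 path. A generic sketch of the pattern follows; gen_config and some_tool are placeholders for illustration, not SPDK names.

    # <(...) runs the generator in a subshell and expands to /dev/fd/NN,
    # so the consumer reads the freshly rendered config with no temp file.
    gen_config() { printf '{"params":{"name":"Nvme1"}}\n'; }
    some_tool --json <(gen_config)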
00:22:36.414 [2024-07-15 03:25:42.468546] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3226372 ] 00:22:36.414 [2024-07-15 03:25:42.529044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:36.672 [2024-07-15 03:25:42.611695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.672 [2024-07-15 03:25:42.611751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.672 [2024-07-15 03:25:42.611754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.672 I/O targets: 00:22:36.672 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:36.672 00:22:36.672 00:22:36.672 CUnit - A unit testing framework for C - Version 2.1-3 00:22:36.672 http://cunit.sourceforge.net/ 00:22:36.672 00:22:36.672 00:22:36.672 Suite: bdevio tests on: Nvme1n1 00:22:36.930 Test: blockdev write read block ...passed 00:22:36.930 Test: blockdev write zeroes read block ...passed 00:22:36.930 Test: blockdev write zeroes read no split ...passed 00:22:36.930 Test: blockdev write zeroes read split ...passed 00:22:36.930 Test: blockdev write zeroes read split partial ...passed 00:22:36.930 Test: blockdev reset ...[2024-07-15 03:25:42.975355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:36.930 [2024-07-15 03:25:42.975465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12dbb00 (9): Bad file descriptor 00:22:36.930 [2024-07-15 03:25:42.989142] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:36.930 passed 00:22:36.930 Test: blockdev write read 8 blocks ...passed 00:22:36.930 Test: blockdev write read size > 128k ...passed 00:22:36.930 Test: blockdev write read invalid size ...passed 00:22:37.188 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:37.188 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:37.188 Test: blockdev write read max offset ...passed 00:22:37.188 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:37.188 Test: blockdev writev readv 8 blocks ...passed 00:22:37.188 Test: blockdev writev readv 30 x 1block ...passed 00:22:37.188 Test: blockdev writev readv block ...passed 00:22:37.188 Test: blockdev writev readv size > 128k ...passed 00:22:37.188 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:37.188 Test: blockdev comparev and writev ...[2024-07-15 03:25:43.248063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:37.188 [2024-07-15 03:25:43.248101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.188 [2024-07-15 03:25:43.248125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:37.188 [2024-07-15 03:25:43.248142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.188 [2024-07-15 03:25:43.248478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:37.188 [2024-07-15 03:25:43.248512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:37.188 [2024-07-15 03:25:43.248545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:37.188 [2024-07-15 03:25:43.248574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:37.188 [2024-07-15 03:25:43.248925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:37.188 [2024-07-15 03:25:43.248959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:37.188 [2024-07-15 03:25:43.248995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:37.188 [2024-07-15 03:25:43.249024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:37.188 [2024-07-15 03:25:43.249376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:37.188 [2024-07-15 03:25:43.249404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:37.188 [2024-07-15 03:25:43.249428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:37.188 [2024-07-15 03:25:43.249445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:37.188 passed 00:22:37.445 Test: blockdev nvme passthru rw ...passed 00:22:37.445 Test: blockdev nvme passthru vendor specific ...[2024-07-15 03:25:43.333203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:37.445 [2024-07-15 03:25:43.333232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:37.445 [2024-07-15 03:25:43.333401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:37.445 [2024-07-15 03:25:43.333424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:37.445 [2024-07-15 03:25:43.333608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:37.445 [2024-07-15 03:25:43.333631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:37.445 [2024-07-15 03:25:43.333805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:37.445 [2024-07-15 03:25:43.333830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:37.445 passed 00:22:37.445 Test: blockdev nvme admin passthru ...passed 00:22:37.445 Test: blockdev copy ...passed 00:22:37.445 00:22:37.445 Run Summary: Type Total Ran Passed Failed Inactive 00:22:37.445 suites 1 1 n/a 0 0 00:22:37.445 tests 23 23 23 0 0 00:22:37.445 asserts 152 152 152 0 n/a 00:22:37.445 00:22:37.445 Elapsed time = 1.161 seconds 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:37.703 rmmod nvme_tcp 00:22:37.703 rmmod nvme_fabrics 00:22:37.703 rmmod nvme_keyring 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3226344 ']' 00:22:37.703 03:25:43 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3226344 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3226344 ']' 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3226344 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3226344 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3226344' 00:22:37.703 killing process with pid 3226344 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3226344 00:22:37.703 03:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3226344 00:22:38.270 03:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:38.270 03:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:38.270 03:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:38.270 03:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:38.270 03:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:38.270 03:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.270 03:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.270 03:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.173 03:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:40.173 00:22:40.173 real 0m6.373s 00:22:40.173 user 0m9.975s 00:22:40.173 sys 0m2.481s 00:22:40.173 03:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:40.173 03:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.173 ************************************ 00:22:40.173 END TEST nvmf_bdevio_no_huge 00:22:40.173 ************************************ 00:22:40.173 03:25:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:40.173 03:25:46 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:40.173 03:25:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:40.173 03:25:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.173 03:25:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:40.173 ************************************ 00:22:40.173 START TEST nvmf_tls 00:22:40.173 ************************************ 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:40.173 * Looking for test storage... 
00:22:40.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:40.173 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:40.174 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:40.174 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.174 03:25:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.174 03:25:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.174 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:40.174 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:40.174 03:25:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:40.174 03:25:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:42.074 
03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:42.074 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:42.074 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:42.074 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:42.074 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:42.074 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:42.075 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:42.334 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.334 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:42.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:22:42.335 00:22:42.335 --- 10.0.0.2 ping statistics --- 00:22:42.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.335 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:22:42.335 00:22:42.335 --- 10.0.0.1 ping statistics --- 00:22:42.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.335 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3228437 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3228437 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3228437 ']' 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.335 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.335 [2024-07-15 03:25:48.414784] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:42.335 [2024-07-15 03:25:48.414895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.335 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.593 [2024-07-15 03:25:48.490083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.593 [2024-07-15 03:25:48.584641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.593 [2024-07-15 03:25:48.584715] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:42.593 [2024-07-15 03:25:48.584745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.593 [2024-07-15 03:25:48.584760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.593 [2024-07-15 03:25:48.584772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.593 [2024-07-15 03:25:48.584810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.593 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.593 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:42.593 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:42.593 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:42.593 03:25:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.593 03:25:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.593 03:25:48 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:42.593 03:25:48 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:42.851 true 00:22:42.851 03:25:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.851 03:25:48 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:43.110 03:25:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:43.110 03:25:49 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:43.110 03:25:49 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:43.368 03:25:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:43.368 03:25:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:43.630 03:25:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:43.630 03:25:49 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:43.630 03:25:49 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:43.951 03:25:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:43.951 03:25:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:44.209 03:25:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:44.210 03:25:50 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:44.210 03:25:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.210 03:25:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:44.468 03:25:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:44.468 03:25:50 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:44.468 03:25:50 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:44.726 03:25:50 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.726 03:25:50 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:44.983 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:44.983 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:44.983 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:45.240 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:45.240 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.HRL7eI6DWQ 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.fk8xo6UJyn 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.HRL7eI6DWQ 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.fk8xo6UJyn 00:22:45.498 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:45.755 03:25:51 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:46.322 03:25:52 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.HRL7eI6DWQ 00:22:46.322 03:25:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HRL7eI6DWQ 00:22:46.322 03:25:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:46.322 [2024-07-15 03:25:52.460504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.580 03:25:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:46.838 03:25:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:46.838 [2024-07-15 03:25:52.945753] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.838 [2024-07-15 03:25:52.946006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.838 03:25:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:47.096 malloc0 00:22:47.096 03:25:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:47.352 03:25:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HRL7eI6DWQ 00:22:47.609 [2024-07-15 03:25:53.708045] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:47.609 03:25:53 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.HRL7eI6DWQ 00:22:47.866 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.828 Initializing NVMe Controllers 00:22:57.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:57.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:57.828 Initialization complete. Launching workers. 
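Before the perf numbers land, a note on the two NVMeTLSkey-1 strings generated by format_interchange_psk further up: the trace shows the helper shelling out to an inline python program but not the program's body. The 48-character base64 payloads are consistent with the NVMe/TCP PSK interchange layout, i.e. the configured key followed by a 4-byte CRC-32 trailer, base64-encoded between the 'NVMeTLSkey-1:01:' prefix and a closing colon. A minimal sketch under those assumptions (the CRC byte order and the exact helper body are inferred, not shown in this log):

    key=00112233445566778899aabbccddeeff   # first key used by this run
    python - "$key" <<'PY'
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")  # assumption: little-endian CRC-32 trailer
    print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
    PY
    # This run logged: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: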
00:22:57.828 ========================================================
00:22:57.828 Latency(us)
00:22:57.828 Device Information : IOPS MiB/s Average min max
00:22:57.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7786.56 30.42 8222.03 1214.25 9979.28
00:22:57.828 ========================================================
00:22:57.828 Total : 7786.56 30.42 8222.03 1214.25 9979.28
00:22:57.828
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HRL7eI6DWQ
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HRL7eI6DWQ'
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3230328
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3230328 /var/tmp/bdevperf.sock
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3230328 ']'
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:57.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:57.828 03:26:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:57.828 [2024-07-15 03:26:03.872193] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:22:57.828 [2024-07-15 03:26:03.872267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230328 ] 00:22:57.828 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.828 [2024-07-15 03:26:03.931265] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.086 [2024-07-15 03:26:04.016240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.086 03:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.086 03:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:58.086 03:26:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HRL7eI6DWQ 00:22:58.343 [2024-07-15 03:26:04.341200] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.343 [2024-07-15 03:26:04.341318] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:58.343 TLSTESTn1 00:22:58.343 03:26:04 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:58.600 Running I/O for 10 seconds... 00:23:08.579 00:23:08.579 Latency(us) 00:23:08.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.579 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:08.579 Verification LBA range: start 0x0 length 0x2000 00:23:08.579 TLSTESTn1 : 10.02 3550.81 13.87 0.00 0.00 35980.34 8009.96 48545.19 00:23:08.579 =================================================================================================================== 00:23:08.579 Total : 3550.81 13.87 0.00 0.00 35980.34 8009.96 48545.19 00:23:08.579 0 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3230328 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3230328 ']' 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3230328 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3230328 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3230328' 00:23:08.579 killing process with pid 3230328 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3230328 00:23:08.579 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.579 00:23:08.579 Latency(us) 00:23:08.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:08.579 =================================================================================================================== 00:23:08.579 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.579 [2024-07-15 03:26:14.630325] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:08.579 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3230328 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fk8xo6UJyn 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fk8xo6UJyn 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fk8xo6UJyn 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fk8xo6UJyn' 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3231525 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3231525 /var/tmp/bdevperf.sock 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3231525 ']' 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.837 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.838 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.838 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.838 03:26:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.838 [2024-07-15 03:26:14.877010] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:08.838 [2024-07-15 03:26:14.877115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3231525 ] 00:23:08.838 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.838 [2024-07-15 03:26:14.943429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.096 [2024-07-15 03:26:15.035364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.096 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.096 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:09.096 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fk8xo6UJyn 00:23:09.353 [2024-07-15 03:26:15.421695] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.353 [2024-07-15 03:26:15.421824] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:09.353 [2024-07-15 03:26:15.432773] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:09.353 [2024-07-15 03:26:15.433600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9abb0 (107): Transport endpoint is not connected 00:23:09.353 [2024-07-15 03:26:15.434591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9abb0 (9): Bad file descriptor 00:23:09.353 [2024-07-15 03:26:15.435593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.353 [2024-07-15 03:26:15.435611] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:09.354 [2024-07-15 03:26:15.435643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
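The attach above fails by design: /tmp/tmp.fk8xo6UJyn holds the second interchange key (built from ffeeddccbbaa99887766554433221100), while the target was only given the first one, so the TLS handshake never completes and the controller ends up in the failed state logged above. Both key files were produced earlier by format_interchange_psk through the inline "python -" helper traced at nvmf/common.sh@702-705. A minimal sketch of that formatting, assuming the interchange layout is "NVMeTLSkey-1:<digest>:" followed by Base64 of the key bytes plus a little-endian CRC32 and a trailing colon (the 48-character Base64 strings above are consistent with 32 key bytes plus 4 CRC bytes):

  # Sketch only; format_interchange_psk_sketch is a hypothetical stand-in for the
  # harness helper, not the shipped implementation.
  format_interchange_psk_sketch() {
          python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
' "$1" "$2"
  }
  # If the CRC assumption holds, this reproduces the first key captured above:
  #   format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1
  #   -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The JSON-RPC request for this mismatched-key attach and its -5 (Input/output error) response follow.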
00:23:09.354 request: 00:23:09.354 { 00:23:09.354 "name": "TLSTEST", 00:23:09.354 "trtype": "tcp", 00:23:09.354 "traddr": "10.0.0.2", 00:23:09.354 "adrfam": "ipv4", 00:23:09.354 "trsvcid": "4420", 00:23:09.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.354 "prchk_reftag": false, 00:23:09.354 "prchk_guard": false, 00:23:09.354 "hdgst": false, 00:23:09.354 "ddgst": false, 00:23:09.354 "psk": "/tmp/tmp.fk8xo6UJyn", 00:23:09.354 "method": "bdev_nvme_attach_controller", 00:23:09.354 "req_id": 1 00:23:09.354 } 00:23:09.354 Got JSON-RPC error response 00:23:09.354 response: 00:23:09.354 { 00:23:09.354 "code": -5, 00:23:09.354 "message": "Input/output error" 00:23:09.354 } 00:23:09.354 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3231525 00:23:09.354 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3231525 ']' 00:23:09.354 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3231525 00:23:09.354 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:09.354 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.354 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3231525 00:23:09.354 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:09.354 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:09.354 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3231525' 00:23:09.354 killing process with pid 3231525 00:23:09.354 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3231525 00:23:09.354 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.354 00:23:09.354 Latency(us) 00:23:09.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.354 =================================================================================================================== 00:23:09.354 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.354 [2024-07-15 03:26:15.488343] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:09.354 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3231525 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.HRL7eI6DWQ 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.HRL7eI6DWQ 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.HRL7eI6DWQ 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HRL7eI6DWQ' 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3231655 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3231655 /var/tmp/bdevperf.sock 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3231655 ']' 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.612 03:26:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.612 [2024-07-15 03:26:15.749463] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:09.612 [2024-07-15 03:26:15.749553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3231655 ] 00:23:09.869 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.869 [2024-07-15 03:26:15.809889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.869 [2024-07-15 03:26:15.903873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.127 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.127 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:10.127 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.HRL7eI6DWQ 00:23:10.385 [2024-07-15 03:26:16.290107] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.385 [2024-07-15 03:26:16.290240] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:10.385 [2024-07-15 03:26:16.295598] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:10.385 [2024-07-15 03:26:16.295630] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:10.386 [2024-07-15 03:26:16.295669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:10.386 [2024-07-15 03:26:16.296155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1500bb0 (107): Transport endpoint is not connected 00:23:10.386 [2024-07-15 03:26:16.297142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1500bb0 (9): Bad file descriptor 00:23:10.386 [2024-07-15 03:26:16.298141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:10.386 [2024-07-15 03:26:16.298163] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:10.386 [2024-07-15 03:26:16.298181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
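This second negative case uses the correct key bytes but the wrong identity: the target builds its PSK lookup string from the connecting host NQN and the target subsystem NQN (logged above as "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"), and only the host1/cnode1 pair was registered during setup. The registration that would let host2 resolve, shown for illustration only (it is deliberately never issued in this run):

  # Hypothetical fix, not part of this test: register host2 against cnode1 with
  # the same 0600-protected key file that host1 uses.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
          nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.HRL7eI6DWQ

Because the server simply has no PSK to offer for that identity, the initiator again surfaces only the generic -5 Input/output error in the dump below.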
00:23:10.386 request: 00:23:10.386 { 00:23:10.386 "name": "TLSTEST", 00:23:10.386 "trtype": "tcp", 00:23:10.386 "traddr": "10.0.0.2", 00:23:10.386 "adrfam": "ipv4", 00:23:10.386 "trsvcid": "4420", 00:23:10.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.386 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:10.386 "prchk_reftag": false, 00:23:10.386 "prchk_guard": false, 00:23:10.386 "hdgst": false, 00:23:10.386 "ddgst": false, 00:23:10.386 "psk": "/tmp/tmp.HRL7eI6DWQ", 00:23:10.386 "method": "bdev_nvme_attach_controller", 00:23:10.386 "req_id": 1 00:23:10.386 } 00:23:10.386 Got JSON-RPC error response 00:23:10.386 response: 00:23:10.386 { 00:23:10.386 "code": -5, 00:23:10.386 "message": "Input/output error" 00:23:10.386 } 00:23:10.386 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3231655 00:23:10.386 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3231655 ']' 00:23:10.386 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3231655 00:23:10.386 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:10.386 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.386 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3231655 00:23:10.386 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:10.386 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:10.386 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3231655' 00:23:10.386 killing process with pid 3231655 00:23:10.386 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3231655 00:23:10.386 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.386 00:23:10.386 Latency(us) 00:23:10.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.386 =================================================================================================================== 00:23:10.386 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:10.386 [2024-07-15 03:26:16.349030] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:10.386 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3231655 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.HRL7eI6DWQ 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.HRL7eI6DWQ 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.HRL7eI6DWQ 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HRL7eI6DWQ' 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3231792 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3231792 /var/tmp/bdevperf.sock 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3231792 ']' 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.644 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.644 [2024-07-15 03:26:16.611136] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:10.644 [2024-07-15 03:26:16.611242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3231792 ] 00:23:10.644 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.644 [2024-07-15 03:26:16.669936] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.644 [2024-07-15 03:26:16.755135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.902 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.902 03:26:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:10.902 03:26:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HRL7eI6DWQ 00:23:11.161 [2024-07-15 03:26:17.078788] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.161 [2024-07-15 03:26:17.078948] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:11.161 [2024-07-15 03:26:17.086954] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:11.161 [2024-07-15 03:26:17.087008] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:11.161 [2024-07-15 03:26:17.087061] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:11.161 [2024-07-15 03:26:17.087983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2479bb0 (107): Transport endpoint is not connected 00:23:11.161 [2024-07-15 03:26:17.088974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2479bb0 (9): Bad file descriptor 00:23:11.161 [2024-07-15 03:26:17.089974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:11.161 [2024-07-15 03:26:17.089996] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:11.161 [2024-07-15 03:26:17.090013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
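The third case is the mirror image: host1 is valid, but the subsystem half of the identity (nqn.2016-06.io.spdk:cnode2) was never created, so the same server-side PSK lookup fails. All of these expected failures run under the NOT/valid_exec_arg machinery from autotest_common.sh whose es bookkeeping appears in the traces; a simplified sketch of the inversion it performs (the real wrapper also checks that its argument is a function or executable and treats es > 128, a death by signal, specially):

  # Simplified sketch of NOT(): the test passes only when the wrapped command fails.
  NOT() {
          local es=0
          "$@" || es=$?
          (( es != 0 ))    # a zero exit from "$@" makes NOT itself fail
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.HRL7eI6DWQ

The failing request and response for the cnode2 attempt are dumped below.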
00:23:11.161 request: 00:23:11.161 { 00:23:11.161 "name": "TLSTEST", 00:23:11.161 "trtype": "tcp", 00:23:11.161 "traddr": "10.0.0.2", 00:23:11.161 "adrfam": "ipv4", 00:23:11.161 "trsvcid": "4420", 00:23:11.161 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:11.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.161 "prchk_reftag": false, 00:23:11.161 "prchk_guard": false, 00:23:11.161 "hdgst": false, 00:23:11.161 "ddgst": false, 00:23:11.161 "psk": "/tmp/tmp.HRL7eI6DWQ", 00:23:11.161 "method": "bdev_nvme_attach_controller", 00:23:11.161 "req_id": 1 00:23:11.161 } 00:23:11.161 Got JSON-RPC error response 00:23:11.161 response: 00:23:11.161 { 00:23:11.161 "code": -5, 00:23:11.161 "message": "Input/output error" 00:23:11.161 } 00:23:11.161 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3231792 00:23:11.161 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3231792 ']' 00:23:11.161 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3231792 00:23:11.161 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:11.161 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.161 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3231792 00:23:11.161 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:11.161 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:11.161 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3231792' 00:23:11.161 killing process with pid 3231792 00:23:11.161 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3231792 00:23:11.161 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.161 00:23:11.161 Latency(us) 00:23:11.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.161 =================================================================================================================== 00:23:11.161 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.161 [2024-07-15 03:26:17.139474] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:11.161 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3231792 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3231814 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3231814 /var/tmp/bdevperf.sock 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3231814 ']' 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.420 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.420 [2024-07-15 03:26:17.406285] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:11.420 [2024-07-15 03:26:17.406376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3231814 ] 00:23:11.420 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.420 [2024-07-15 03:26:17.467007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.420 [2024-07-15 03:26:17.550838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.678 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.678 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:11.678 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:11.937 [2024-07-15 03:26:17.909119] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:11.937 [2024-07-15 03:26:17.910830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d8160 (9): Bad file descriptor 00:23:11.937 [2024-07-15 03:26:17.911827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:11.937 [2024-07-15 03:26:17.911847] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:11.937 [2024-07-15 03:26:17.911896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
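The last case in this group drops the PSK altogether (psk= is empty, so bdev_nvme_attach_controller runs with no --psk argument at all). The listener for cnode1 was added with -k (target/tls.sh@53), which marks it as a TLS listener, so a plain TCP attach cannot establish a session and dies with the Bad file descriptor / failed-state sequence above. For reference, the listener-side half of that contract as it was configured earlier in the run:

  # From the earlier target setup: -k on the listener is what makes the
  # no-PSK attach below a guaranteed failure.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -k

The request in the dump below matches the earlier ones except that the "psk" field is absent, and the response is the same -5 Input/output error.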
00:23:11.937 request: 00:23:11.937 { 00:23:11.937 "name": "TLSTEST", 00:23:11.937 "trtype": "tcp", 00:23:11.937 "traddr": "10.0.0.2", 00:23:11.937 "adrfam": "ipv4", 00:23:11.937 "trsvcid": "4420", 00:23:11.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.937 "prchk_reftag": false, 00:23:11.937 "prchk_guard": false, 00:23:11.937 "hdgst": false, 00:23:11.937 "ddgst": false, 00:23:11.937 "method": "bdev_nvme_attach_controller", 00:23:11.937 "req_id": 1 00:23:11.937 } 00:23:11.937 Got JSON-RPC error response 00:23:11.937 response: 00:23:11.937 { 00:23:11.937 "code": -5, 00:23:11.937 "message": "Input/output error" 00:23:11.937 } 00:23:11.937 03:26:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3231814 00:23:11.937 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3231814 ']' 00:23:11.937 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3231814 00:23:11.937 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:11.937 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.937 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3231814 00:23:11.937 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:11.937 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:11.937 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3231814' 00:23:11.937 killing process with pid 3231814 00:23:11.937 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3231814 00:23:11.937 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.937 00:23:11.937 Latency(us) 00:23:11.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.937 =================================================================================================================== 00:23:11.937 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.937 03:26:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3231814 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3228437 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3228437 ']' 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3228437 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3228437 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3228437' 00:23:12.195 
killing process with pid 3228437 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3228437 00:23:12.195 [2024-07-15 03:26:18.178509] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:12.195 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3228437 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.wwkzYivz4C 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.wwkzYivz4C 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3231963 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3231963 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3231963 ']' 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.455 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.455 [2024-07-15 03:26:18.501383] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:12.455 [2024-07-15 03:26:18.501474] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.455 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.455 [2024-07-15 03:26:18.566110] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.760 [2024-07-15 03:26:18.656064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.760 [2024-07-15 03:26:18.656123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.760 [2024-07-15 03:26:18.656136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.760 [2024-07-15 03:26:18.656148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.760 [2024-07-15 03:26:18.656171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.760 [2024-07-15 03:26:18.656197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.760 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.760 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:12.760 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.760 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:12.760 03:26:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.760 03:26:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.760 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.wwkzYivz4C 00:23:12.760 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wwkzYivz4C 00:23:12.760 03:26:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:13.018 [2024-07-15 03:26:19.065594] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.018 03:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:13.275 03:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:13.534 [2024-07-15 03:26:19.611057] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.534 [2024-07-15 03:26:19.611343] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.534 03:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:13.792 malloc0 00:23:13.792 03:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:14.357 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.wwkzYivz4C 00:23:14.357 [2024-07-15 03:26:20.493864] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:14.615 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wwkzYivz4C 00:23:14.615 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:14.615 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:14.615 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:14.615 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wwkzYivz4C' 00:23:14.615 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.615 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3232250 00:23:14.615 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:14.615 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.615 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3232250 /var/tmp/bdevperf.sock 00:23:14.616 03:26:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3232250 ']' 00:23:14.616 03:26:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.616 03:26:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.616 03:26:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.616 03:26:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.616 03:26:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.616 [2024-07-15 03:26:20.557266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:14.616 [2024-07-15 03:26:20.557355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3232250 ] 00:23:14.616 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.616 [2024-07-15 03:26:20.615605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.616 [2024-07-15 03:26:20.699003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.874 03:26:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.874 03:26:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:14.874 03:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wwkzYivz4C 00:23:15.135 [2024-07-15 03:26:21.047180] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.135 [2024-07-15 03:26:21.047297] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:15.135 TLSTESTn1 00:23:15.135 03:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:15.135 Running I/O for 10 seconds... 00:23:27.327 00:23:27.327 Latency(us) 00:23:27.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.327 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:27.327 Verification LBA range: start 0x0 length 0x2000 00:23:27.327 TLSTESTn1 : 10.02 3560.22 13.91 0.00 0.00 35887.01 9806.13 47185.92 00:23:27.327 =================================================================================================================== 00:23:27.327 Total : 3560.22 13.91 0.00 0.00 35887.01 9806.13 47185.92 00:23:27.327 0 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3232250 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3232250 ']' 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3232250 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3232250 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3232250' 00:23:27.327 killing process with pid 3232250 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3232250 00:23:27.327 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.327 00:23:27.327 Latency(us) 00:23:27.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:27.327 =================================================================================================================== 00:23:27.327 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.327 [2024-07-15 03:26:31.338545] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3232250 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.wwkzYivz4C 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wwkzYivz4C 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wwkzYivz4C 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wwkzYivz4C 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wwkzYivz4C' 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3233560 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3233560 /var/tmp/bdevperf.sock 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3233560 ']' 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.327 [2024-07-15 03:26:31.615600] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:27.327 [2024-07-15 03:26:31.615676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3233560 ] 00:23:27.327 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.327 [2024-07-15 03:26:31.673216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.327 [2024-07-15 03:26:31.753586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:27.327 03:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wwkzYivz4C 00:23:27.327 [2024-07-15 03:26:32.080965] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.327 [2024-07-15 03:26:32.081068] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:27.327 [2024-07-15 03:26:32.081085] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.wwkzYivz4C 00:23:27.327 request: 00:23:27.327 { 00:23:27.327 "name": "TLSTEST", 00:23:27.327 "trtype": "tcp", 00:23:27.327 "traddr": "10.0.0.2", 00:23:27.327 "adrfam": "ipv4", 00:23:27.327 "trsvcid": "4420", 00:23:27.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.327 "prchk_reftag": false, 00:23:27.327 "prchk_guard": false, 00:23:27.327 "hdgst": false, 00:23:27.327 "ddgst": false, 00:23:27.327 "psk": "/tmp/tmp.wwkzYivz4C", 00:23:27.327 "method": "bdev_nvme_attach_controller", 00:23:27.327 "req_id": 1 00:23:27.327 } 00:23:27.327 Got JSON-RPC error response 00:23:27.327 response: 00:23:27.327 { 00:23:27.327 "code": -1, 00:23:27.327 "message": "Operation not permitted" 00:23:27.327 } 00:23:27.327 03:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3233560 00:23:27.327 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3233560 ']' 00:23:27.327 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3233560 00:23:27.327 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:27.327 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.327 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3233560 00:23:27.327 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:27.327 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:27.327 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3233560' 00:23:27.327 killing process with pid 3233560 00:23:27.327 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3233560 00:23:27.327 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.327 00:23:27.327 Latency(us) 00:23:27.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.328 
=================================================================================================================== 00:23:27.328 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3233560 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3231963 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3231963 ']' 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3231963 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3231963 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3231963' 00:23:27.328 killing process with pid 3231963 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3231963 00:23:27.328 [2024-07-15 03:26:32.368872] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3231963 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3233704 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3233704 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3233704 ']' 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
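
tls.sh@175 now brings up a fresh target inside the cvl_0_0_ns_spdk namespace and blocks until /var/tmp/spdk.sock answers. A sketch of such a wait loop; probing with rpc_get_methods and the 0.1 s interval are choices made here, not details taken from the log:

    # Block until the target's RPC socket answers (bounded at roughly 10 s).
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.1
    done
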
00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.328 [2024-07-15 03:26:32.647981] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:27.328 [2024-07-15 03:26:32.648055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.328 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.328 [2024-07-15 03:26:32.714015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.328 [2024-07-15 03:26:32.802467] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.328 [2024-07-15 03:26:32.802532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.328 [2024-07-15 03:26:32.802548] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.328 [2024-07-15 03:26:32.802570] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.328 [2024-07-15 03:26:32.802583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.328 [2024-07-15 03:26:32.802619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.wwkzYivz4C 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.wwkzYivz4C 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.wwkzYivz4C 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wwkzYivz4C 00:23:27.328 03:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:27.328 [2024-07-15 03:26:33.160716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.328 03:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:27.328 
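
setup_nvmf_tgt provisions the target end to end; its commands are the ones traced in the surrounding lines and are collected in the sketch below (rpc.py path shortened). With the key still at mode 0666 the sequence runs cleanly until the final add_host, the first step that actually opens the PSK file, which is why that step is the one that fails next:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled (logged as experimental)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wwkzYivz4C
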
03:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:27.584 [2024-07-15 03:26:33.645997] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.584 [2024-07-15 03:26:33.646242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.584 03:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:27.841 malloc0 00:23:27.841 03:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:28.097 03:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wwkzYivz4C 00:23:28.355 [2024-07-15 03:26:34.371649] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:28.355 [2024-07-15 03:26:34.371690] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:28.355 [2024-07-15 03:26:34.371738] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:28.355 request: 00:23:28.355 { 00:23:28.355 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.355 "host": "nqn.2016-06.io.spdk:host1", 00:23:28.355 "psk": "/tmp/tmp.wwkzYivz4C", 00:23:28.355 "method": "nvmf_subsystem_add_host", 00:23:28.355 "req_id": 1 00:23:28.355 } 00:23:28.355 Got JSON-RPC error response 00:23:28.355 response: 00:23:28.355 { 00:23:28.355 "code": -32603, 00:23:28.355 "message": "Internal error" 00:23:28.355 } 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3233704 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3233704 ']' 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3233704 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3233704 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3233704' 00:23:28.355 killing process with pid 3233704 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3233704 00:23:28.355 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3233704 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.wwkzYivz4C 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:28.613 
03:26:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3233996 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3233996 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3233996 ']' 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.613 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.613 [2024-07-15 03:26:34.689226] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:28.613 [2024-07-15 03:26:34.689315] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.613 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.869 [2024-07-15 03:26:34.756787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.869 [2024-07-15 03:26:34.851344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.869 [2024-07-15 03:26:34.851415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.869 [2024-07-15 03:26:34.851432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.870 [2024-07-15 03:26:34.851445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.870 [2024-07-15 03:26:34.851456] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
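
The previous instance rejected nvmf_subsystem_add_host with JSON-RPC -32603 (Internal error) because the key file was still mode 0666; tls.sh@181 tightened it to 0600 before this restart, after which the same call is expected to succeed. The fix reduces to:

    # The target will not load a PSK whose file is readable by group or other.
    chmod 0600 /tmp/tmp.wwkzYivz4C
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wwkzYivz4C
    # succeeds, emitting the 'PSK path' deprecation warning seen below
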
00:23:28.870 [2024-07-15 03:26:34.851489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.870 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.870 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:28.870 03:26:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.870 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:28.870 03:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.870 03:26:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.870 03:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.wwkzYivz4C 00:23:28.870 03:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wwkzYivz4C 00:23:28.870 03:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:29.126 [2024-07-15 03:26:35.223313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.126 03:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:29.382 03:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:29.638 [2024-07-15 03:26:35.704591] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.638 [2024-07-15 03:26:35.704846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.639 03:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:29.896 malloc0 00:23:29.896 03:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:30.153 03:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wwkzYivz4C 00:23:30.411 [2024-07-15 03:26:36.445535] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:30.411 03:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3234162 00:23:30.411 03:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.411 03:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.411 03:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3234162 /var/tmp/bdevperf.sock 00:23:30.411 03:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3234162 ']' 00:23:30.411 03:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.411 03:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.411 03:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.411 03:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.411 03:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.411 [2024-07-15 03:26:36.504731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:30.411 [2024-07-15 03:26:36.504817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234162 ] 00:23:30.411 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.669 [2024-07-15 03:26:36.566704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.669 [2024-07-15 03:26:36.657852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.669 03:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.669 03:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:30.669 03:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wwkzYivz4C 00:23:30.926 [2024-07-15 03:26:37.040375] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.926 [2024-07-15 03:26:37.040500] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:31.183 TLSTESTn1 00:23:31.183 03:26:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:31.441 03:26:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:31.441 "subsystems": [ 00:23:31.441 { 00:23:31.441 "subsystem": "keyring", 00:23:31.441 "config": [] 00:23:31.441 }, 00:23:31.441 { 00:23:31.441 "subsystem": "iobuf", 00:23:31.441 "config": [ 00:23:31.441 { 00:23:31.441 "method": "iobuf_set_options", 00:23:31.441 "params": { 00:23:31.441 "small_pool_count": 8192, 00:23:31.441 "large_pool_count": 1024, 00:23:31.441 "small_bufsize": 8192, 00:23:31.441 "large_bufsize": 135168 00:23:31.441 } 00:23:31.441 } 00:23:31.441 ] 00:23:31.441 }, 00:23:31.441 { 00:23:31.441 "subsystem": "sock", 00:23:31.441 "config": [ 00:23:31.441 { 00:23:31.441 "method": "sock_set_default_impl", 00:23:31.441 "params": { 00:23:31.441 "impl_name": "posix" 00:23:31.441 } 00:23:31.441 }, 00:23:31.441 { 00:23:31.441 "method": "sock_impl_set_options", 00:23:31.441 "params": { 00:23:31.441 "impl_name": "ssl", 00:23:31.441 "recv_buf_size": 4096, 00:23:31.441 "send_buf_size": 4096, 00:23:31.441 "enable_recv_pipe": true, 00:23:31.441 "enable_quickack": false, 00:23:31.441 "enable_placement_id": 0, 00:23:31.441 "enable_zerocopy_send_server": true, 00:23:31.441 "enable_zerocopy_send_client": false, 00:23:31.441 "zerocopy_threshold": 0, 00:23:31.441 "tls_version": 0, 00:23:31.441 "enable_ktls": false 00:23:31.441 } 00:23:31.441 }, 00:23:31.441 { 00:23:31.441 "method": "sock_impl_set_options", 00:23:31.441 "params": { 00:23:31.441 "impl_name": "posix", 00:23:31.441 "recv_buf_size": 2097152, 00:23:31.441 
"send_buf_size": 2097152, 00:23:31.441 "enable_recv_pipe": true, 00:23:31.441 "enable_quickack": false, 00:23:31.441 "enable_placement_id": 0, 00:23:31.441 "enable_zerocopy_send_server": true, 00:23:31.441 "enable_zerocopy_send_client": false, 00:23:31.441 "zerocopy_threshold": 0, 00:23:31.441 "tls_version": 0, 00:23:31.441 "enable_ktls": false 00:23:31.441 } 00:23:31.441 } 00:23:31.441 ] 00:23:31.441 }, 00:23:31.441 { 00:23:31.441 "subsystem": "vmd", 00:23:31.441 "config": [] 00:23:31.441 }, 00:23:31.442 { 00:23:31.442 "subsystem": "accel", 00:23:31.442 "config": [ 00:23:31.442 { 00:23:31.442 "method": "accel_set_options", 00:23:31.442 "params": { 00:23:31.442 "small_cache_size": 128, 00:23:31.442 "large_cache_size": 16, 00:23:31.442 "task_count": 2048, 00:23:31.442 "sequence_count": 2048, 00:23:31.442 "buf_count": 2048 00:23:31.442 } 00:23:31.442 } 00:23:31.442 ] 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "subsystem": "bdev", 00:23:31.442 "config": [ 00:23:31.442 { 00:23:31.442 "method": "bdev_set_options", 00:23:31.442 "params": { 00:23:31.442 "bdev_io_pool_size": 65535, 00:23:31.442 "bdev_io_cache_size": 256, 00:23:31.442 "bdev_auto_examine": true, 00:23:31.442 "iobuf_small_cache_size": 128, 00:23:31.442 "iobuf_large_cache_size": 16 00:23:31.442 } 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "method": "bdev_raid_set_options", 00:23:31.442 "params": { 00:23:31.442 "process_window_size_kb": 1024 00:23:31.442 } 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "method": "bdev_iscsi_set_options", 00:23:31.442 "params": { 00:23:31.442 "timeout_sec": 30 00:23:31.442 } 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "method": "bdev_nvme_set_options", 00:23:31.442 "params": { 00:23:31.442 "action_on_timeout": "none", 00:23:31.442 "timeout_us": 0, 00:23:31.442 "timeout_admin_us": 0, 00:23:31.442 "keep_alive_timeout_ms": 10000, 00:23:31.442 "arbitration_burst": 0, 00:23:31.442 "low_priority_weight": 0, 00:23:31.442 "medium_priority_weight": 0, 00:23:31.442 "high_priority_weight": 0, 00:23:31.442 "nvme_adminq_poll_period_us": 10000, 00:23:31.442 "nvme_ioq_poll_period_us": 0, 00:23:31.442 "io_queue_requests": 0, 00:23:31.442 "delay_cmd_submit": true, 00:23:31.442 "transport_retry_count": 4, 00:23:31.442 "bdev_retry_count": 3, 00:23:31.442 "transport_ack_timeout": 0, 00:23:31.442 "ctrlr_loss_timeout_sec": 0, 00:23:31.442 "reconnect_delay_sec": 0, 00:23:31.442 "fast_io_fail_timeout_sec": 0, 00:23:31.442 "disable_auto_failback": false, 00:23:31.442 "generate_uuids": false, 00:23:31.442 "transport_tos": 0, 00:23:31.442 "nvme_error_stat": false, 00:23:31.442 "rdma_srq_size": 0, 00:23:31.442 "io_path_stat": false, 00:23:31.442 "allow_accel_sequence": false, 00:23:31.442 "rdma_max_cq_size": 0, 00:23:31.442 "rdma_cm_event_timeout_ms": 0, 00:23:31.442 "dhchap_digests": [ 00:23:31.442 "sha256", 00:23:31.442 "sha384", 00:23:31.442 "sha512" 00:23:31.442 ], 00:23:31.442 "dhchap_dhgroups": [ 00:23:31.442 "null", 00:23:31.442 "ffdhe2048", 00:23:31.442 "ffdhe3072", 00:23:31.442 "ffdhe4096", 00:23:31.442 "ffdhe6144", 00:23:31.442 "ffdhe8192" 00:23:31.442 ] 00:23:31.442 } 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "method": "bdev_nvme_set_hotplug", 00:23:31.442 "params": { 00:23:31.442 "period_us": 100000, 00:23:31.442 "enable": false 00:23:31.442 } 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "method": "bdev_malloc_create", 00:23:31.442 "params": { 00:23:31.442 "name": "malloc0", 00:23:31.442 "num_blocks": 8192, 00:23:31.442 "block_size": 4096, 00:23:31.442 "physical_block_size": 4096, 00:23:31.442 "uuid": 
"da5ba235-1833-495c-ad79-ee54474bd225", 00:23:31.442 "optimal_io_boundary": 0 00:23:31.442 } 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "method": "bdev_wait_for_examine" 00:23:31.442 } 00:23:31.442 ] 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "subsystem": "nbd", 00:23:31.442 "config": [] 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "subsystem": "scheduler", 00:23:31.442 "config": [ 00:23:31.442 { 00:23:31.442 "method": "framework_set_scheduler", 00:23:31.442 "params": { 00:23:31.442 "name": "static" 00:23:31.442 } 00:23:31.442 } 00:23:31.442 ] 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "subsystem": "nvmf", 00:23:31.442 "config": [ 00:23:31.442 { 00:23:31.442 "method": "nvmf_set_config", 00:23:31.442 "params": { 00:23:31.442 "discovery_filter": "match_any", 00:23:31.442 "admin_cmd_passthru": { 00:23:31.442 "identify_ctrlr": false 00:23:31.442 } 00:23:31.442 } 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "method": "nvmf_set_max_subsystems", 00:23:31.442 "params": { 00:23:31.442 "max_subsystems": 1024 00:23:31.442 } 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "method": "nvmf_set_crdt", 00:23:31.442 "params": { 00:23:31.442 "crdt1": 0, 00:23:31.442 "crdt2": 0, 00:23:31.442 "crdt3": 0 00:23:31.442 } 00:23:31.442 }, 00:23:31.442 { 00:23:31.442 "method": "nvmf_create_transport", 00:23:31.442 "params": { 00:23:31.442 "trtype": "TCP", 00:23:31.442 "max_queue_depth": 128, 00:23:31.442 "max_io_qpairs_per_ctrlr": 127, 00:23:31.442 "in_capsule_data_size": 4096, 00:23:31.442 "max_io_size": 131072, 00:23:31.442 "io_unit_size": 131072, 00:23:31.442 "max_aq_depth": 128, 00:23:31.442 "num_shared_buffers": 511, 00:23:31.442 "buf_cache_size": 4294967295, 00:23:31.442 "dif_insert_or_strip": false, 00:23:31.442 "zcopy": false, 00:23:31.442 "c2h_success": false, 00:23:31.443 "sock_priority": 0, 00:23:31.443 "abort_timeout_sec": 1, 00:23:31.443 "ack_timeout": 0, 00:23:31.443 "data_wr_pool_size": 0 00:23:31.443 } 00:23:31.443 }, 00:23:31.443 { 00:23:31.443 "method": "nvmf_create_subsystem", 00:23:31.443 "params": { 00:23:31.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.443 "allow_any_host": false, 00:23:31.443 "serial_number": "SPDK00000000000001", 00:23:31.443 "model_number": "SPDK bdev Controller", 00:23:31.443 "max_namespaces": 10, 00:23:31.443 "min_cntlid": 1, 00:23:31.443 "max_cntlid": 65519, 00:23:31.443 "ana_reporting": false 00:23:31.443 } 00:23:31.443 }, 00:23:31.443 { 00:23:31.443 "method": "nvmf_subsystem_add_host", 00:23:31.443 "params": { 00:23:31.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.443 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.443 "psk": "/tmp/tmp.wwkzYivz4C" 00:23:31.443 } 00:23:31.443 }, 00:23:31.443 { 00:23:31.443 "method": "nvmf_subsystem_add_ns", 00:23:31.443 "params": { 00:23:31.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.443 "namespace": { 00:23:31.443 "nsid": 1, 00:23:31.443 "bdev_name": "malloc0", 00:23:31.443 "nguid": "DA5BA2351833495CAD79EE54474BD225", 00:23:31.443 "uuid": "da5ba235-1833-495c-ad79-ee54474bd225", 00:23:31.443 "no_auto_visible": false 00:23:31.443 } 00:23:31.443 } 00:23:31.443 }, 00:23:31.443 { 00:23:31.443 "method": "nvmf_subsystem_add_listener", 00:23:31.443 "params": { 00:23:31.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.443 "listen_address": { 00:23:31.443 "trtype": "TCP", 00:23:31.443 "adrfam": "IPv4", 00:23:31.443 "traddr": "10.0.0.2", 00:23:31.443 "trsvcid": "4420" 00:23:31.443 }, 00:23:31.443 "secure_channel": true 00:23:31.443 } 00:23:31.443 } 00:23:31.443 ] 00:23:31.443 } 00:23:31.443 ] 00:23:31.443 }' 00:23:31.443 03:26:37 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:31.701 03:26:37 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:31.701 "subsystems": [ 00:23:31.701 { 00:23:31.701 "subsystem": "keyring", 00:23:31.701 "config": [] 00:23:31.701 }, 00:23:31.701 { 00:23:31.701 "subsystem": "iobuf", 00:23:31.701 "config": [ 00:23:31.701 { 00:23:31.701 "method": "iobuf_set_options", 00:23:31.701 "params": { 00:23:31.701 "small_pool_count": 8192, 00:23:31.701 "large_pool_count": 1024, 00:23:31.701 "small_bufsize": 8192, 00:23:31.701 "large_bufsize": 135168 00:23:31.701 } 00:23:31.701 } 00:23:31.701 ] 00:23:31.701 }, 00:23:31.701 { 00:23:31.701 "subsystem": "sock", 00:23:31.701 "config": [ 00:23:31.701 { 00:23:31.701 "method": "sock_set_default_impl", 00:23:31.701 "params": { 00:23:31.701 "impl_name": "posix" 00:23:31.701 } 00:23:31.701 }, 00:23:31.701 { 00:23:31.701 "method": "sock_impl_set_options", 00:23:31.701 "params": { 00:23:31.701 "impl_name": "ssl", 00:23:31.701 "recv_buf_size": 4096, 00:23:31.701 "send_buf_size": 4096, 00:23:31.701 "enable_recv_pipe": true, 00:23:31.701 "enable_quickack": false, 00:23:31.701 "enable_placement_id": 0, 00:23:31.701 "enable_zerocopy_send_server": true, 00:23:31.701 "enable_zerocopy_send_client": false, 00:23:31.701 "zerocopy_threshold": 0, 00:23:31.701 "tls_version": 0, 00:23:31.701 "enable_ktls": false 00:23:31.701 } 00:23:31.701 }, 00:23:31.701 { 00:23:31.701 "method": "sock_impl_set_options", 00:23:31.701 "params": { 00:23:31.701 "impl_name": "posix", 00:23:31.701 "recv_buf_size": 2097152, 00:23:31.701 "send_buf_size": 2097152, 00:23:31.701 "enable_recv_pipe": true, 00:23:31.701 "enable_quickack": false, 00:23:31.701 "enable_placement_id": 0, 00:23:31.701 "enable_zerocopy_send_server": true, 00:23:31.701 "enable_zerocopy_send_client": false, 00:23:31.701 "zerocopy_threshold": 0, 00:23:31.701 "tls_version": 0, 00:23:31.701 "enable_ktls": false 00:23:31.701 } 00:23:31.701 } 00:23:31.701 ] 00:23:31.701 }, 00:23:31.701 { 00:23:31.701 "subsystem": "vmd", 00:23:31.701 "config": [] 00:23:31.701 }, 00:23:31.701 { 00:23:31.701 "subsystem": "accel", 00:23:31.701 "config": [ 00:23:31.701 { 00:23:31.701 "method": "accel_set_options", 00:23:31.701 "params": { 00:23:31.701 "small_cache_size": 128, 00:23:31.701 "large_cache_size": 16, 00:23:31.701 "task_count": 2048, 00:23:31.701 "sequence_count": 2048, 00:23:31.701 "buf_count": 2048 00:23:31.701 } 00:23:31.702 } 00:23:31.702 ] 00:23:31.702 }, 00:23:31.702 { 00:23:31.702 "subsystem": "bdev", 00:23:31.702 "config": [ 00:23:31.702 { 00:23:31.702 "method": "bdev_set_options", 00:23:31.702 "params": { 00:23:31.702 "bdev_io_pool_size": 65535, 00:23:31.702 "bdev_io_cache_size": 256, 00:23:31.702 "bdev_auto_examine": true, 00:23:31.702 "iobuf_small_cache_size": 128, 00:23:31.702 "iobuf_large_cache_size": 16 00:23:31.702 } 00:23:31.702 }, 00:23:31.702 { 00:23:31.702 "method": "bdev_raid_set_options", 00:23:31.702 "params": { 00:23:31.702 "process_window_size_kb": 1024 00:23:31.702 } 00:23:31.702 }, 00:23:31.702 { 00:23:31.702 "method": "bdev_iscsi_set_options", 00:23:31.702 "params": { 00:23:31.702 "timeout_sec": 30 00:23:31.702 } 00:23:31.702 }, 00:23:31.702 { 00:23:31.702 "method": "bdev_nvme_set_options", 00:23:31.702 "params": { 00:23:31.702 "action_on_timeout": "none", 00:23:31.702 "timeout_us": 0, 00:23:31.702 "timeout_admin_us": 0, 00:23:31.702 "keep_alive_timeout_ms": 10000, 00:23:31.702 "arbitration_burst": 0, 
00:23:31.702 "low_priority_weight": 0, 00:23:31.702 "medium_priority_weight": 0, 00:23:31.702 "high_priority_weight": 0, 00:23:31.702 "nvme_adminq_poll_period_us": 10000, 00:23:31.702 "nvme_ioq_poll_period_us": 0, 00:23:31.702 "io_queue_requests": 512, 00:23:31.702 "delay_cmd_submit": true, 00:23:31.702 "transport_retry_count": 4, 00:23:31.702 "bdev_retry_count": 3, 00:23:31.702 "transport_ack_timeout": 0, 00:23:31.702 "ctrlr_loss_timeout_sec": 0, 00:23:31.702 "reconnect_delay_sec": 0, 00:23:31.702 "fast_io_fail_timeout_sec": 0, 00:23:31.702 "disable_auto_failback": false, 00:23:31.702 "generate_uuids": false, 00:23:31.702 "transport_tos": 0, 00:23:31.702 "nvme_error_stat": false, 00:23:31.702 "rdma_srq_size": 0, 00:23:31.702 "io_path_stat": false, 00:23:31.702 "allow_accel_sequence": false, 00:23:31.702 "rdma_max_cq_size": 0, 00:23:31.702 "rdma_cm_event_timeout_ms": 0, 00:23:31.702 "dhchap_digests": [ 00:23:31.702 "sha256", 00:23:31.702 "sha384", 00:23:31.702 "sha512" 00:23:31.702 ], 00:23:31.702 "dhchap_dhgroups": [ 00:23:31.702 "null", 00:23:31.702 "ffdhe2048", 00:23:31.702 "ffdhe3072", 00:23:31.702 "ffdhe4096", 00:23:31.702 "ffdhe6144", 00:23:31.702 "ffdhe8192" 00:23:31.702 ] 00:23:31.702 } 00:23:31.702 }, 00:23:31.702 { 00:23:31.702 "method": "bdev_nvme_attach_controller", 00:23:31.702 "params": { 00:23:31.702 "name": "TLSTEST", 00:23:31.702 "trtype": "TCP", 00:23:31.702 "adrfam": "IPv4", 00:23:31.702 "traddr": "10.0.0.2", 00:23:31.702 "trsvcid": "4420", 00:23:31.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.702 "prchk_reftag": false, 00:23:31.702 "prchk_guard": false, 00:23:31.702 "ctrlr_loss_timeout_sec": 0, 00:23:31.702 "reconnect_delay_sec": 0, 00:23:31.702 "fast_io_fail_timeout_sec": 0, 00:23:31.702 "psk": "/tmp/tmp.wwkzYivz4C", 00:23:31.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.702 "hdgst": false, 00:23:31.702 "ddgst": false 00:23:31.702 } 00:23:31.702 }, 00:23:31.702 { 00:23:31.702 "method": "bdev_nvme_set_hotplug", 00:23:31.702 "params": { 00:23:31.702 "period_us": 100000, 00:23:31.702 "enable": false 00:23:31.702 } 00:23:31.702 }, 00:23:31.702 { 00:23:31.702 "method": "bdev_wait_for_examine" 00:23:31.702 } 00:23:31.702 ] 00:23:31.702 }, 00:23:31.702 { 00:23:31.702 "subsystem": "nbd", 00:23:31.702 "config": [] 00:23:31.702 } 00:23:31.702 ] 00:23:31.702 }' 00:23:31.702 03:26:37 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3234162 00:23:31.702 03:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3234162 ']' 00:23:31.702 03:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3234162 00:23:31.702 03:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:31.702 03:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.702 03:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3234162 00:23:31.702 03:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:31.702 03:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:31.702 03:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3234162' 00:23:31.702 killing process with pid 3234162 00:23:31.702 03:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3234162 00:23:31.702 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.702 00:23:31.702 Latency(us) 00:23:31.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:31.702 =================================================================================================================== 00:23:31.702 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.702 [2024-07-15 03:26:37.840076] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:31.702 03:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3234162 00:23:31.960 03:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3233996 00:23:31.960 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3233996 ']' 00:23:31.960 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3233996 00:23:31.960 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:31.960 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.960 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3233996 00:23:31.960 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:31.960 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:31.960 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3233996' 00:23:31.960 killing process with pid 3233996 00:23:31.960 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3233996 00:23:31.960 [2024-07-15 03:26:38.085927] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:31.960 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3233996 00:23:32.217 03:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:32.217 03:26:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.217 03:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:32.217 "subsystems": [ 00:23:32.217 { 00:23:32.217 "subsystem": "keyring", 00:23:32.217 "config": [] 00:23:32.217 }, 00:23:32.217 { 00:23:32.218 "subsystem": "iobuf", 00:23:32.218 "config": [ 00:23:32.218 { 00:23:32.218 "method": "iobuf_set_options", 00:23:32.218 "params": { 00:23:32.218 "small_pool_count": 8192, 00:23:32.218 "large_pool_count": 1024, 00:23:32.218 "small_bufsize": 8192, 00:23:32.218 "large_bufsize": 135168 00:23:32.218 } 00:23:32.218 } 00:23:32.218 ] 00:23:32.218 }, 00:23:32.218 { 00:23:32.218 "subsystem": "sock", 00:23:32.218 "config": [ 00:23:32.218 { 00:23:32.218 "method": "sock_set_default_impl", 00:23:32.218 "params": { 00:23:32.218 "impl_name": "posix" 00:23:32.218 } 00:23:32.218 }, 00:23:32.218 { 00:23:32.218 "method": "sock_impl_set_options", 00:23:32.218 "params": { 00:23:32.218 "impl_name": "ssl", 00:23:32.218 "recv_buf_size": 4096, 00:23:32.218 "send_buf_size": 4096, 00:23:32.218 "enable_recv_pipe": true, 00:23:32.218 "enable_quickack": false, 00:23:32.218 "enable_placement_id": 0, 00:23:32.218 "enable_zerocopy_send_server": true, 00:23:32.218 "enable_zerocopy_send_client": false, 00:23:32.218 "zerocopy_threshold": 0, 00:23:32.218 "tls_version": 0, 00:23:32.218 "enable_ktls": false 00:23:32.218 } 00:23:32.218 }, 00:23:32.218 { 00:23:32.218 "method": "sock_impl_set_options", 00:23:32.218 "params": { 00:23:32.218 "impl_name": "posix", 00:23:32.218 "recv_buf_size": 2097152, 00:23:32.218 "send_buf_size": 2097152, 00:23:32.218 "enable_recv_pipe": true, 
00:23:32.218 "enable_quickack": false, 00:23:32.218 "enable_placement_id": 0, 00:23:32.218 "enable_zerocopy_send_server": true, 00:23:32.218 "enable_zerocopy_send_client": false, 00:23:32.218 "zerocopy_threshold": 0, 00:23:32.218 "tls_version": 0, 00:23:32.218 "enable_ktls": false 00:23:32.218 } 00:23:32.218 } 00:23:32.218 ] 00:23:32.218 }, 00:23:32.218 { 00:23:32.218 "subsystem": "vmd", 00:23:32.218 "config": [] 00:23:32.218 }, 00:23:32.218 { 00:23:32.218 "subsystem": "accel", 00:23:32.218 "config": [ 00:23:32.218 { 00:23:32.218 "method": "accel_set_options", 00:23:32.218 "params": { 00:23:32.218 "small_cache_size": 128, 00:23:32.218 "large_cache_size": 16, 00:23:32.218 "task_count": 2048, 00:23:32.218 "sequence_count": 2048, 00:23:32.218 "buf_count": 2048 00:23:32.218 } 00:23:32.218 } 00:23:32.218 ] 00:23:32.218 }, 00:23:32.218 { 00:23:32.218 "subsystem": "bdev", 00:23:32.218 "config": [ 00:23:32.218 { 00:23:32.218 "method": "bdev_set_options", 00:23:32.218 "params": { 00:23:32.218 "bdev_io_pool_size": 65535, 00:23:32.218 "bdev_io_cache_size": 256, 00:23:32.218 "bdev_auto_examine": true, 00:23:32.218 "iobuf_small_cache_size": 128, 00:23:32.218 "iobuf_large_cache_size": 16 00:23:32.218 } 00:23:32.218 }, 00:23:32.218 { 00:23:32.218 "method": "bdev_raid_set_options", 00:23:32.218 "params": { 00:23:32.218 "process_window_size_kb": 1024 00:23:32.218 } 00:23:32.218 }, 00:23:32.218 { 00:23:32.218 "method": "bdev_iscsi_set_options", 00:23:32.218 "params": { 00:23:32.218 "timeout_sec": 30 00:23:32.218 } 00:23:32.218 }, 00:23:32.218 { 00:23:32.218 "method": "bdev_nvme_set_options", 00:23:32.218 "params": { 00:23:32.218 "action_on_timeout": "none", 00:23:32.218 "timeout_us": 0, 00:23:32.218 "timeout_admin_us": 0, 00:23:32.218 "keep_alive_timeout_ms": 10000, 00:23:32.218 "arbitration_burst": 0, 00:23:32.218 "low_priority_weight": 0, 00:23:32.218 "medium_priority_weight": 0, 00:23:32.218 "high_priority_weight": 0, 00:23:32.218 "nvme_adminq_poll_period_us": 10000, 00:23:32.218 "nvme_ioq_poll_period_us": 0, 00:23:32.218 "io_queue_requests": 0, 00:23:32.218 "delay_cmd_submit": true, 00:23:32.218 "transport_retry_count": 4, 00:23:32.218 "bdev_retry_count": 3, 00:23:32.218 "transport_ack_timeout": 0, 00:23:32.218 "ctrlr_loss_timeout_sec": 0, 00:23:32.218 "reconnect_delay_sec": 0, 00:23:32.218 "fast_io_fail_timeout_sec": 0, 00:23:32.218 "disable_auto_failback": false, 00:23:32.218 "generate_uuids": false, 00:23:32.218 "transport_tos": 0, 00:23:32.218 "nvme_error_stat": false, 00:23:32.218 "rdma_srq_size": 0, 00:23:32.218 "io_path_stat": false, 00:23:32.218 "allow_accel_sequence": false, 00:23:32.218 "rdma_max_cq_size": 0, 00:23:32.218 "rdma_cm_event_timeout_ms": 0, 00:23:32.218 "dhchap_digests": [ 00:23:32.218 "sha256", 00:23:32.218 "sha384", 00:23:32.218 "sha512" 00:23:32.218 ], 00:23:32.218 "dhchap_dhgroups": [ 00:23:32.219 "null", 00:23:32.219 "ffdhe2048", 00:23:32.219 "ffdhe3072", 00:23:32.219 "ffdhe4096", 00:23:32.219 "ffdhe6144", 00:23:32.219 "ffdhe8192" 00:23:32.219 ] 00:23:32.219 } 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "method": "bdev_nvme_set_hotplug", 00:23:32.219 "params": { 00:23:32.219 "period_us": 100000, 00:23:32.219 "enable": false 00:23:32.219 } 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "method": "bdev_malloc_create", 00:23:32.219 "params": { 00:23:32.219 "name": "malloc0", 00:23:32.219 "num_blocks": 8192, 00:23:32.219 "block_size": 4096, 00:23:32.219 "physical_block_size": 4096, 00:23:32.219 "uuid": "da5ba235-1833-495c-ad79-ee54474bd225", 00:23:32.219 "optimal_io_boundary": 0 
00:23:32.219 } 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "method": "bdev_wait_for_examine" 00:23:32.219 } 00:23:32.219 ] 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "subsystem": "nbd", 00:23:32.219 "config": [] 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "subsystem": "scheduler", 00:23:32.219 "config": [ 00:23:32.219 { 00:23:32.219 "method": "framework_set_scheduler", 00:23:32.219 "params": { 00:23:32.219 "name": "static" 00:23:32.219 } 00:23:32.219 } 00:23:32.219 ] 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "subsystem": "nvmf", 00:23:32.219 "config": [ 00:23:32.219 { 00:23:32.219 "method": "nvmf_set_config", 00:23:32.219 "params": { 00:23:32.219 "discovery_filter": "match_any", 00:23:32.219 "admin_cmd_passthru": { 00:23:32.219 "identify_ctrlr": false 00:23:32.219 } 00:23:32.219 } 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "method": "nvmf_set_max_subsystems", 00:23:32.219 "params": { 00:23:32.219 "max_subsystems": 1024 00:23:32.219 } 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "method": "nvmf_set_crdt", 00:23:32.219 "params": { 00:23:32.219 "crdt1": 0, 00:23:32.219 "crdt2": 0, 00:23:32.219 "crdt3": 0 00:23:32.219 } 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "method": "nvmf_create_transport", 00:23:32.219 "params": { 00:23:32.219 "trtype": "TCP", 00:23:32.219 "max_queue_depth": 128, 00:23:32.219 "max_io_qpairs_per_ctrlr": 127, 00:23:32.219 "in_capsule_data_size": 4096, 00:23:32.219 "max_io_size": 131072, 00:23:32.219 "io_unit_size": 131072, 00:23:32.219 "max_aq_depth": 128, 00:23:32.219 "num_shared_buffers": 511, 00:23:32.219 "buf_cache_size": 4294967295, 00:23:32.219 "dif_insert_or_strip": false, 00:23:32.219 "zcopy": false, 00:23:32.219 "c2h_success": false, 00:23:32.219 "sock_priority": 0, 00:23:32.219 "abort_timeout_sec": 1, 00:23:32.219 "ack_timeout": 0, 00:23:32.219 "data_wr_pool_size": 0 00:23:32.219 } 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "method": "nvmf_create_subsystem", 00:23:32.219 "params": { 00:23:32.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.219 "allow_any_host": false, 00:23:32.219 "serial_number": "SPDK00000000000001", 00:23:32.219 "model_number": "SPDK bdev Controller", 00:23:32.219 "max_namespaces": 10, 00:23:32.219 "min_cntlid": 1, 00:23:32.219 "max_cntlid": 65519, 00:23:32.219 "ana_reporting": false 00:23:32.219 } 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "method": "nvmf_subsystem_add_host", 00:23:32.219 "params": { 00:23:32.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.219 "host": "nqn.2016-06.io.spdk:host1", 00:23:32.219 "psk": "/tmp/tmp.wwkzYivz4C" 00:23:32.219 } 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "method": "nvmf_subsystem_add_ns", 00:23:32.219 "params": { 00:23:32.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.219 "namespace": { 00:23:32.219 "nsid": 1, 00:23:32.219 "bdev_name": "malloc0", 00:23:32.219 "nguid": "DA5BA2351833495CAD79EE54474BD225", 00:23:32.219 "uuid": "da5ba235-1833-495c-ad79-ee54474bd225", 00:23:32.219 "no_auto_visible": false 00:23:32.219 } 00:23:32.219 } 00:23:32.219 }, 00:23:32.219 { 00:23:32.219 "method": "nvmf_subsystem_add_listener", 00:23:32.219 "params": { 00:23:32.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.219 "listen_address": { 00:23:32.219 "trtype": "TCP", 00:23:32.219 "adrfam": "IPv4", 00:23:32.219 "traddr": "10.0.0.2", 00:23:32.219 "trsvcid": "4420" 00:23:32.219 }, 00:23:32.219 "secure_channel": true 00:23:32.219 } 00:23:32.219 } 00:23:32.219 ] 00:23:32.219 } 00:23:32.219 ] 00:23:32.219 }' 00:23:32.219 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:32.219 
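
The JSON block above is the save_config snapshot taken from the previous target at tls.sh@196, echoed into /dev/fd/62 so that the target launched at tls.sh@203 boots already carrying the transport, subsystem, TLS listener, namespace and host/PSK mapping. The same pattern sketched with an ordinary file (/tmp/tgt.json is a stand-in name, not from the log):

    # Snapshot the live target's configuration...
    scripts/rpc.py save_config > /tmp/tgt.json
    # ...then start a fresh target preconfigured from it; the harness passes
    # the JSON via process substitution (-c /dev/fd/62) instead of a file.
    build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt.json
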
03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.219 03:26:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3234436 00:23:32.219 03:26:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:32.219 03:26:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3234436 00:23:32.219 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3234436 ']' 00:23:32.219 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.219 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.219 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.219 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.219 03:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.477 [2024-07-15 03:26:38.382135] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:32.477 [2024-07-15 03:26:38.382240] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.477 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.477 [2024-07-15 03:26:38.445365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.477 [2024-07-15 03:26:38.530555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.477 [2024-07-15 03:26:38.530618] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.477 [2024-07-15 03:26:38.530645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.477 [2024-07-15 03:26:38.530659] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.477 [2024-07-15 03:26:38.530670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
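
This instance runs with -e 0xFFFF, so all tracepoint groups are active and the startup notices name both capture paths. Spelled out (the copy destination is an arbitrary choice):

    # Live snapshot from the shared-memory trace region named in the notice:
    build/bin/spdk_trace -s nvmf -i 0
    # Or preserve the region for offline analysis once the app exits:
    cp /dev/shm/nvmf_trace.0 /tmp/
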
00:23:32.477 [2024-07-15 03:26:38.530758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.735 [2024-07-15 03:26:38.768875] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.735 [2024-07-15 03:26:38.784824] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:32.735 [2024-07-15 03:26:38.800889] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.735 [2024-07-15 03:26:38.818106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3234584 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3234584 /var/tmp/bdevperf.sock 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3234584 ']' 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.299 03:26:39 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:33.299 "subsystems": [ 00:23:33.299 { 00:23:33.299 "subsystem": "keyring", 00:23:33.299 "config": [] 00:23:33.299 }, 00:23:33.299 { 00:23:33.299 "subsystem": "iobuf", 00:23:33.299 "config": [ 00:23:33.299 { 00:23:33.299 "method": "iobuf_set_options", 00:23:33.299 "params": { 00:23:33.299 "small_pool_count": 8192, 00:23:33.299 "large_pool_count": 1024, 00:23:33.299 "small_bufsize": 8192, 00:23:33.299 "large_bufsize": 135168 00:23:33.299 } 00:23:33.299 } 00:23:33.299 ] 00:23:33.299 }, 00:23:33.299 { 00:23:33.299 "subsystem": "sock", 00:23:33.299 "config": [ 00:23:33.299 { 00:23:33.299 "method": "sock_set_default_impl", 00:23:33.299 "params": { 00:23:33.299 "impl_name": "posix" 00:23:33.299 } 00:23:33.299 }, 00:23:33.299 { 00:23:33.299 "method": "sock_impl_set_options", 00:23:33.299 "params": { 00:23:33.299 "impl_name": "ssl", 00:23:33.299 "recv_buf_size": 4096, 00:23:33.299 "send_buf_size": 4096, 00:23:33.299 "enable_recv_pipe": true, 00:23:33.299 "enable_quickack": false, 00:23:33.299 "enable_placement_id": 0, 00:23:33.299 "enable_zerocopy_send_server": true, 00:23:33.299 "enable_zerocopy_send_client": false, 00:23:33.299 "zerocopy_threshold": 0, 00:23:33.299 "tls_version": 0, 00:23:33.299 "enable_ktls": false 00:23:33.299 } 00:23:33.299 }, 00:23:33.299 { 00:23:33.299 "method": "sock_impl_set_options", 00:23:33.299 "params": { 00:23:33.299 "impl_name": "posix", 00:23:33.299 "recv_buf_size": 2097152, 00:23:33.299 "send_buf_size": 2097152, 00:23:33.299 "enable_recv_pipe": true, 00:23:33.299 
"enable_quickack": false, 00:23:33.299 "enable_placement_id": 0, 00:23:33.299 "enable_zerocopy_send_server": true, 00:23:33.299 "enable_zerocopy_send_client": false, 00:23:33.299 "zerocopy_threshold": 0, 00:23:33.299 "tls_version": 0, 00:23:33.299 "enable_ktls": false 00:23:33.299 } 00:23:33.299 } 00:23:33.299 ] 00:23:33.299 }, 00:23:33.299 { 00:23:33.299 "subsystem": "vmd", 00:23:33.299 "config": [] 00:23:33.299 }, 00:23:33.299 { 00:23:33.299 "subsystem": "accel", 00:23:33.299 "config": [ 00:23:33.299 { 00:23:33.299 "method": "accel_set_options", 00:23:33.299 "params": { 00:23:33.299 "small_cache_size": 128, 00:23:33.299 "large_cache_size": 16, 00:23:33.299 "task_count": 2048, 00:23:33.299 "sequence_count": 2048, 00:23:33.299 "buf_count": 2048 00:23:33.299 } 00:23:33.299 } 00:23:33.299 ] 00:23:33.299 }, 00:23:33.299 { 00:23:33.299 "subsystem": "bdev", 00:23:33.299 "config": [ 00:23:33.299 { 00:23:33.300 "method": "bdev_set_options", 00:23:33.300 "params": { 00:23:33.300 "bdev_io_pool_size": 65535, 00:23:33.300 "bdev_io_cache_size": 256, 00:23:33.300 "bdev_auto_examine": true, 00:23:33.300 "iobuf_small_cache_size": 128, 00:23:33.300 "iobuf_large_cache_size": 16 00:23:33.300 } 00:23:33.300 }, 00:23:33.300 { 00:23:33.300 "method": "bdev_raid_set_options", 00:23:33.300 "params": { 00:23:33.300 "process_window_size_kb": 1024 00:23:33.300 } 00:23:33.300 }, 00:23:33.300 { 00:23:33.300 "method": "bdev_iscsi_set_options", 00:23:33.300 "params": { 00:23:33.300 "timeout_sec": 30 00:23:33.300 } 00:23:33.300 }, 00:23:33.300 { 00:23:33.300 "method": "bdev_nvme_set_options", 00:23:33.300 "params": { 00:23:33.300 "action_on_timeout": "none", 00:23:33.300 "timeout_us": 0, 00:23:33.300 "timeout_admin_us": 0, 00:23:33.300 "keep_alive_timeout_ms": 10000, 00:23:33.300 "arbitration_burst": 0, 00:23:33.300 "low_priority_weight": 0, 00:23:33.300 "medium_priority_weight": 0, 00:23:33.300 "high_priority_weight": 0, 00:23:33.300 "nvme_adminq_poll_period_us": 10000, 00:23:33.300 "nvme_ioq_poll_period_us": 0, 00:23:33.300 "io_queue_requests": 512, 00:23:33.300 "delay_cmd_submit": true, 00:23:33.300 "transport_retry_count": 4, 00:23:33.300 "bdev_retry_count": 3, 00:23:33.300 "transport_ack_timeout": 0, 00:23:33.300 "ctrlr_loss_timeout_sec": 0, 00:23:33.300 "reconnect_delay_sec": 0, 00:23:33.300 "fast_io_fail_timeout_sec": 0, 00:23:33.300 "disable_auto_failback": false, 00:23:33.300 "generate_uuids": false, 00:23:33.300 "transport_tos": 0, 00:23:33.300 "nvme_error_stat": false, 00:23:33.300 "rdma_srq_size": 0, 00:23:33.300 "io_path_stat": false, 00:23:33.300 "allow_accel_sequence": false, 00:23:33.300 "rdma_max_cq_size": 0, 00:23:33.300 "rdma_cm_event_timeout_ms": 0, 00:23:33.300 "dhchap_digests": [ 00:23:33.300 "sha256", 00:23:33.300 "sha384", 00:23:33.300 "sha512" 00:23:33.300 ], 00:23:33.300 "dhchap_dhgroups": [ 00:23:33.300 "null", 00:23:33.300 "ffdhe2048", 00:23:33.300 "ffdhe3072", 00:23:33.300 "ffdhe4096", 00:23:33.300 "ffdhe6144", 00:23:33.300 "ffdhe8192" 00:23:33.300 ] 00:23:33.300 } 00:23:33.300 }, 00:23:33.300 { 00:23:33.300 "method": "bdev_nvme_attach_controller", 00:23:33.300 "params": { 00:23:33.300 "name": "TLSTEST", 00:23:33.300 "trtype": "TCP", 00:23:33.300 "adrfam": "IPv4", 00:23:33.300 "traddr": "10.0.0.2", 00:23:33.300 "trsvcid": "4420", 00:23:33.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.300 "prchk_reftag": false, 00:23:33.300 "prchk_guard": false, 00:23:33.300 "ctrlr_loss_timeout_sec": 0, 00:23:33.300 "reconnect_delay_sec": 0, 00:23:33.300 "fast_io_fail_timeout_sec": 0, 00:23:33.300 
"psk": "/tmp/tmp.wwkzYivz4C", 00:23:33.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.300 "hdgst": false, 00:23:33.300 "ddgst": false 00:23:33.300 } 00:23:33.300 }, 00:23:33.300 { 00:23:33.300 "method": "bdev_nvme_set_hotplug", 00:23:33.300 "params": { 00:23:33.300 "period_us": 100000, 00:23:33.300 "enable": false 00:23:33.300 } 00:23:33.300 }, 00:23:33.300 { 00:23:33.300 "method": "bdev_wait_for_examine" 00:23:33.300 } 00:23:33.300 ] 00:23:33.300 }, 00:23:33.300 { 00:23:33.300 "subsystem": "nbd", 00:23:33.300 "config": [] 00:23:33.300 } 00:23:33.300 ] 00:23:33.300 }' 00:23:33.300 03:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.300 03:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.300 03:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.300 [2024-07-15 03:26:39.390232] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:33.300 [2024-07-15 03:26:39.390320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234584 ] 00:23:33.300 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.558 [2024-07-15 03:26:39.449520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.558 [2024-07-15 03:26:39.535058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.816 [2024-07-15 03:26:39.706579] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.816 [2024-07-15 03:26:39.706691] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:34.383 03:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.383 03:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:34.383 03:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:34.383 Running I/O for 10 seconds... 
00:23:46.607 00:23:46.608 Latency(us) 00:23:46.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.608 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:46.608 Verification LBA range: start 0x0 length 0x2000 00:23:46.608 TLSTESTn1 : 10.02 3519.52 13.75 0.00 0.00 36302.97 6262.33 53982.25 00:23:46.608 =================================================================================================================== 00:23:46.608 Total : 3519.52 13.75 0.00 0.00 36302.97 6262.33 53982.25 00:23:46.608 0 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3234584 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3234584 ']' 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3234584 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3234584 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3234584' 00:23:46.608 killing process with pid 3234584 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3234584 00:23:46.608 Received shutdown signal, test time was about 10.000000 seconds 00:23:46.608 00:23:46.608 Latency(us) 00:23:46.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.608 =================================================================================================================== 00:23:46.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:46.608 [2024-07-15 03:26:50.586602] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3234584 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3234436 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3234436 ']' 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3234436 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3234436 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3234436' 00:23:46.608 killing process with pid 3234436 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3234436 00:23:46.608 [2024-07-15 03:26:50.811039] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal 
in v24.09 hit 1 times 00:23:46.608 03:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3234436 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3235911 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3235911 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3235911 ']' 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.608 [2024-07-15 03:26:51.088223] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:46.608 [2024-07-15 03:26:51.088302] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.608 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.608 [2024-07-15 03:26:51.151377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.608 [2024-07-15 03:26:51.233864] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.608 [2024-07-15 03:26:51.233939] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.608 [2024-07-15 03:26:51.233963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.608 [2024-07-15 03:26:51.233974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.608 [2024-07-15 03:26:51.233983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
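The two deprecation warnings above (spdk_nvme_ctrlr_opts.psk on the initiator side, nvmf_tcp_psk_path on the target side) point at the same migration: handing a PSK interchange file path around directly is scheduled for removal in v24.09 in favor of named keyring entries. The first bdevperf run was driven by a JSON config whose bdev_nvme_attach_controller params carried "psk": "/tmp/tmp.wwkzYivz4C" verbatim; the runs that follow in this log register the file as a keyring key first and attach by key name. A condensed client-side sketch of the two variants: the direct-path form is reconstructed from that JSON and is illustrative only, while the keyring form mirrors the rpc.py calls that appear later in this trace.

# Deprecated direct form (reconstructed from the first run's JSON config; illustrative):
# the PSK interchange file path goes straight into the controller options.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wwkzYivz4C

# Keyring form used by the later runs: register the interchange file as key0,
# then reference it by name when attaching the TLS-wrapped controller.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wwkzYivz4C
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1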
00:23:46.608 [2024-07-15 03:26:51.234008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.wwkzYivz4C 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wwkzYivz4C 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:46.608 [2024-07-15 03:26:51.600437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:46.608 03:26:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:46.608 [2024-07-15 03:26:52.149886] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:46.608 [2024-07-15 03:26:52.150124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.608 03:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:46.608 malloc0 00:23:46.608 03:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:46.608 03:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wwkzYivz4C 00:23:46.867 [2024-07-15 03:26:52.979027] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:46.867 03:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3236195 00:23:46.867 03:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:46.867 03:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.867 03:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3236195 /var/tmp/bdevperf.sock 00:23:46.867 03:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3236195 ']' 00:23:46.867 03:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.867 03:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.867 03:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.867 03:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.867 03:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.124 [2024-07-15 03:26:53.044470] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:47.124 [2024-07-15 03:26:53.044553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3236195 ] 00:23:47.124 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.124 [2024-07-15 03:26:53.107274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.124 [2024-07-15 03:26:53.198040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.381 03:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.381 03:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:47.381 03:26:53 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wwkzYivz4C 00:23:47.637 03:26:53 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:47.894 [2024-07-15 03:26:53.851007] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:47.894 nvme0n1 00:23:47.894 03:26:53 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:47.894 Running I/O for 1 seconds... 
00:23:49.264 00:23:49.264 Latency(us) 00:23:49.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.264 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:49.264 Verification LBA range: start 0x0 length 0x2000 00:23:49.264 nvme0n1 : 1.03 3479.17 13.59 0.00 0.00 36193.16 6407.96 50486.99 00:23:49.264 =================================================================================================================== 00:23:49.264 Total : 3479.17 13.59 0.00 0.00 36193.16 6407.96 50486.99 00:23:49.264 0 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3236195 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3236195 ']' 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3236195 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3236195 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3236195' 00:23:49.264 killing process with pid 3236195 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3236195 00:23:49.264 Received shutdown signal, test time was about 1.000000 seconds 00:23:49.264 00:23:49.264 Latency(us) 00:23:49.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.264 =================================================================================================================== 00:23:49.264 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3236195 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3235911 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3235911 ']' 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3235911 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3235911 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3235911' 00:23:49.264 killing process with pid 3235911 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3235911 00:23:49.264 [2024-07-15 03:26:55.379728] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:49.264 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3235911 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:49.522 
03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3236477 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3236477 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3236477 ']' 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.522 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.780 [2024-07-15 03:26:55.686731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:49.780 [2024-07-15 03:26:55.686814] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.780 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.780 [2024-07-15 03:26:55.753345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.780 [2024-07-15 03:26:55.840547] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.780 [2024-07-15 03:26:55.840609] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.780 [2024-07-15 03:26:55.840622] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.780 [2024-07-15 03:26:55.840634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.780 [2024-07-15 03:26:55.840644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
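The verify tables printed so far are easy to cross-check: bdevperf runs with a 4096-byte I/O size (-o 4k), so the MiB/s column is just IOPS x 4096 / 2^20. That gives 13.75 MiB/s for the 10-second run's 3519.52 IOPS and 13.59 MiB/s for the 1-second run's 3479.17 IOPS, matching both tables. A one-liner to reproduce the conversion:

# MiB/s = IOPS * io_size_bytes / 2^20, with io_size fixed at 4096 bytes (-o 4k)
awk 'BEGIN { printf "%.2f\n", 3519.52 * 4096 / 1048576 }'   # -> 13.75
awk 'BEGIN { printf "%.2f\n", 3479.17 * 4096 / 1048576 }'   # -> 13.59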
00:23:49.780 [2024-07-15 03:26:55.840671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.038 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.038 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:50.038 03:26:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:50.038 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:50.038 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.038 03:26:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.038 03:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:50.038 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.039 03:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.039 [2024-07-15 03:26:55.981660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.039 malloc0 00:23:50.039 [2024-07-15 03:26:56.013433] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.039 [2024-07-15 03:26:56.013681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.039 03:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.039 03:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3236505 00:23:50.039 03:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:50.039 03:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3236505 /var/tmp/bdevperf.sock 00:23:50.039 03:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3236505 ']' 00:23:50.039 03:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.039 03:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.039 03:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.039 03:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.039 03:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.039 [2024-07-15 03:26:56.084511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:50.039 [2024-07-15 03:26:56.084571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3236505 ] 00:23:50.039 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.039 [2024-07-15 03:26:56.147801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.296 [2024-07-15 03:26:56.249032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.296 03:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.296 03:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:50.296 03:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wwkzYivz4C 00:23:50.554 03:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:50.811 [2024-07-15 03:26:56.815993] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.811 nvme0n1 00:23:50.811 03:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:51.072 Running I/O for 1 seconds... 00:23:52.008 00:23:52.009 Latency(us) 00:23:52.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.009 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:52.009 Verification LBA range: start 0x0 length 0x2000 00:23:52.009 nvme0n1 : 1.02 2955.48 11.54 0.00 0.00 42913.68 9660.49 47962.64 00:23:52.009 =================================================================================================================== 00:23:52.009 Total : 2955.48 11.54 0.00 0.00 42913.68 9660.49 47962.64 00:23:52.009 0 00:23:52.009 03:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:52.009 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.009 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.266 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.267 03:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:52.267 "subsystems": [ 00:23:52.267 { 00:23:52.267 "subsystem": "keyring", 00:23:52.267 "config": [ 00:23:52.267 { 00:23:52.267 "method": "keyring_file_add_key", 00:23:52.267 "params": { 00:23:52.267 "name": "key0", 00:23:52.267 "path": "/tmp/tmp.wwkzYivz4C" 00:23:52.267 } 00:23:52.267 } 00:23:52.267 ] 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "subsystem": "iobuf", 00:23:52.267 "config": [ 00:23:52.267 { 00:23:52.267 "method": "iobuf_set_options", 00:23:52.267 "params": { 00:23:52.267 "small_pool_count": 8192, 00:23:52.267 "large_pool_count": 1024, 00:23:52.267 "small_bufsize": 8192, 00:23:52.267 "large_bufsize": 135168 00:23:52.267 } 00:23:52.267 } 00:23:52.267 ] 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "subsystem": "sock", 00:23:52.267 "config": [ 00:23:52.267 { 00:23:52.267 "method": "sock_set_default_impl", 00:23:52.267 "params": { 00:23:52.267 "impl_name": "posix" 00:23:52.267 } 
00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "method": "sock_impl_set_options", 00:23:52.267 "params": { 00:23:52.267 "impl_name": "ssl", 00:23:52.267 "recv_buf_size": 4096, 00:23:52.267 "send_buf_size": 4096, 00:23:52.267 "enable_recv_pipe": true, 00:23:52.267 "enable_quickack": false, 00:23:52.267 "enable_placement_id": 0, 00:23:52.267 "enable_zerocopy_send_server": true, 00:23:52.267 "enable_zerocopy_send_client": false, 00:23:52.267 "zerocopy_threshold": 0, 00:23:52.267 "tls_version": 0, 00:23:52.267 "enable_ktls": false 00:23:52.267 } 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "method": "sock_impl_set_options", 00:23:52.267 "params": { 00:23:52.267 "impl_name": "posix", 00:23:52.267 "recv_buf_size": 2097152, 00:23:52.267 "send_buf_size": 2097152, 00:23:52.267 "enable_recv_pipe": true, 00:23:52.267 "enable_quickack": false, 00:23:52.267 "enable_placement_id": 0, 00:23:52.267 "enable_zerocopy_send_server": true, 00:23:52.267 "enable_zerocopy_send_client": false, 00:23:52.267 "zerocopy_threshold": 0, 00:23:52.267 "tls_version": 0, 00:23:52.267 "enable_ktls": false 00:23:52.267 } 00:23:52.267 } 00:23:52.267 ] 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "subsystem": "vmd", 00:23:52.267 "config": [] 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "subsystem": "accel", 00:23:52.267 "config": [ 00:23:52.267 { 00:23:52.267 "method": "accel_set_options", 00:23:52.267 "params": { 00:23:52.267 "small_cache_size": 128, 00:23:52.267 "large_cache_size": 16, 00:23:52.267 "task_count": 2048, 00:23:52.267 "sequence_count": 2048, 00:23:52.267 "buf_count": 2048 00:23:52.267 } 00:23:52.267 } 00:23:52.267 ] 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "subsystem": "bdev", 00:23:52.267 "config": [ 00:23:52.267 { 00:23:52.267 "method": "bdev_set_options", 00:23:52.267 "params": { 00:23:52.267 "bdev_io_pool_size": 65535, 00:23:52.267 "bdev_io_cache_size": 256, 00:23:52.267 "bdev_auto_examine": true, 00:23:52.267 "iobuf_small_cache_size": 128, 00:23:52.267 "iobuf_large_cache_size": 16 00:23:52.267 } 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "method": "bdev_raid_set_options", 00:23:52.267 "params": { 00:23:52.267 "process_window_size_kb": 1024 00:23:52.267 } 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "method": "bdev_iscsi_set_options", 00:23:52.267 "params": { 00:23:52.267 "timeout_sec": 30 00:23:52.267 } 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "method": "bdev_nvme_set_options", 00:23:52.267 "params": { 00:23:52.267 "action_on_timeout": "none", 00:23:52.267 "timeout_us": 0, 00:23:52.267 "timeout_admin_us": 0, 00:23:52.267 "keep_alive_timeout_ms": 10000, 00:23:52.267 "arbitration_burst": 0, 00:23:52.267 "low_priority_weight": 0, 00:23:52.267 "medium_priority_weight": 0, 00:23:52.267 "high_priority_weight": 0, 00:23:52.267 "nvme_adminq_poll_period_us": 10000, 00:23:52.267 "nvme_ioq_poll_period_us": 0, 00:23:52.267 "io_queue_requests": 0, 00:23:52.267 "delay_cmd_submit": true, 00:23:52.267 "transport_retry_count": 4, 00:23:52.267 "bdev_retry_count": 3, 00:23:52.267 "transport_ack_timeout": 0, 00:23:52.267 "ctrlr_loss_timeout_sec": 0, 00:23:52.267 "reconnect_delay_sec": 0, 00:23:52.267 "fast_io_fail_timeout_sec": 0, 00:23:52.267 "disable_auto_failback": false, 00:23:52.267 "generate_uuids": false, 00:23:52.267 "transport_tos": 0, 00:23:52.267 "nvme_error_stat": false, 00:23:52.267 "rdma_srq_size": 0, 00:23:52.267 "io_path_stat": false, 00:23:52.267 "allow_accel_sequence": false, 00:23:52.267 "rdma_max_cq_size": 0, 00:23:52.267 "rdma_cm_event_timeout_ms": 0, 00:23:52.267 "dhchap_digests": [ 00:23:52.267 "sha256", 
00:23:52.267 "sha384", 00:23:52.267 "sha512" 00:23:52.267 ], 00:23:52.267 "dhchap_dhgroups": [ 00:23:52.267 "null", 00:23:52.267 "ffdhe2048", 00:23:52.267 "ffdhe3072", 00:23:52.267 "ffdhe4096", 00:23:52.267 "ffdhe6144", 00:23:52.267 "ffdhe8192" 00:23:52.267 ] 00:23:52.267 } 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "method": "bdev_nvme_set_hotplug", 00:23:52.267 "params": { 00:23:52.267 "period_us": 100000, 00:23:52.267 "enable": false 00:23:52.267 } 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "method": "bdev_malloc_create", 00:23:52.267 "params": { 00:23:52.267 "name": "malloc0", 00:23:52.267 "num_blocks": 8192, 00:23:52.267 "block_size": 4096, 00:23:52.267 "physical_block_size": 4096, 00:23:52.267 "uuid": "52059ebf-a2a6-4731-93fa-d57c57fb3c9c", 00:23:52.267 "optimal_io_boundary": 0 00:23:52.267 } 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "method": "bdev_wait_for_examine" 00:23:52.267 } 00:23:52.267 ] 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "subsystem": "nbd", 00:23:52.267 "config": [] 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "subsystem": "scheduler", 00:23:52.267 "config": [ 00:23:52.267 { 00:23:52.267 "method": "framework_set_scheduler", 00:23:52.267 "params": { 00:23:52.267 "name": "static" 00:23:52.267 } 00:23:52.267 } 00:23:52.267 ] 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "subsystem": "nvmf", 00:23:52.267 "config": [ 00:23:52.267 { 00:23:52.267 "method": "nvmf_set_config", 00:23:52.267 "params": { 00:23:52.267 "discovery_filter": "match_any", 00:23:52.267 "admin_cmd_passthru": { 00:23:52.267 "identify_ctrlr": false 00:23:52.267 } 00:23:52.267 } 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "method": "nvmf_set_max_subsystems", 00:23:52.267 "params": { 00:23:52.267 "max_subsystems": 1024 00:23:52.267 } 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "method": "nvmf_set_crdt", 00:23:52.267 "params": { 00:23:52.267 "crdt1": 0, 00:23:52.267 "crdt2": 0, 00:23:52.267 "crdt3": 0 00:23:52.267 } 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "method": "nvmf_create_transport", 00:23:52.267 "params": { 00:23:52.267 "trtype": "TCP", 00:23:52.267 "max_queue_depth": 128, 00:23:52.267 "max_io_qpairs_per_ctrlr": 127, 00:23:52.267 "in_capsule_data_size": 4096, 00:23:52.267 "max_io_size": 131072, 00:23:52.267 "io_unit_size": 131072, 00:23:52.267 "max_aq_depth": 128, 00:23:52.268 "num_shared_buffers": 511, 00:23:52.268 "buf_cache_size": 4294967295, 00:23:52.268 "dif_insert_or_strip": false, 00:23:52.268 "zcopy": false, 00:23:52.268 "c2h_success": false, 00:23:52.268 "sock_priority": 0, 00:23:52.268 "abort_timeout_sec": 1, 00:23:52.268 "ack_timeout": 0, 00:23:52.268 "data_wr_pool_size": 0 00:23:52.268 } 00:23:52.268 }, 00:23:52.268 { 00:23:52.268 "method": "nvmf_create_subsystem", 00:23:52.268 "params": { 00:23:52.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.268 "allow_any_host": false, 00:23:52.268 "serial_number": "00000000000000000000", 00:23:52.268 "model_number": "SPDK bdev Controller", 00:23:52.268 "max_namespaces": 32, 00:23:52.268 "min_cntlid": 1, 00:23:52.268 "max_cntlid": 65519, 00:23:52.268 "ana_reporting": false 00:23:52.268 } 00:23:52.268 }, 00:23:52.268 { 00:23:52.268 "method": "nvmf_subsystem_add_host", 00:23:52.268 "params": { 00:23:52.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.268 "host": "nqn.2016-06.io.spdk:host1", 00:23:52.268 "psk": "key0" 00:23:52.268 } 00:23:52.268 }, 00:23:52.268 { 00:23:52.268 "method": "nvmf_subsystem_add_ns", 00:23:52.268 "params": { 00:23:52.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.268 "namespace": { 00:23:52.268 "nsid": 1, 
00:23:52.268 "bdev_name": "malloc0", 00:23:52.268 "nguid": "52059EBFA2A6473193FAD57C57FB3C9C", 00:23:52.268 "uuid": "52059ebf-a2a6-4731-93fa-d57c57fb3c9c", 00:23:52.268 "no_auto_visible": false 00:23:52.268 } 00:23:52.268 } 00:23:52.268 }, 00:23:52.268 { 00:23:52.268 "method": "nvmf_subsystem_add_listener", 00:23:52.268 "params": { 00:23:52.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.268 "listen_address": { 00:23:52.268 "trtype": "TCP", 00:23:52.268 "adrfam": "IPv4", 00:23:52.268 "traddr": "10.0.0.2", 00:23:52.268 "trsvcid": "4420" 00:23:52.268 }, 00:23:52.268 "secure_channel": true 00:23:52.268 } 00:23:52.268 } 00:23:52.268 ] 00:23:52.268 } 00:23:52.268 ] 00:23:52.268 }' 00:23:52.268 03:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:52.526 03:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:52.526 "subsystems": [ 00:23:52.526 { 00:23:52.526 "subsystem": "keyring", 00:23:52.526 "config": [ 00:23:52.526 { 00:23:52.526 "method": "keyring_file_add_key", 00:23:52.526 "params": { 00:23:52.526 "name": "key0", 00:23:52.526 "path": "/tmp/tmp.wwkzYivz4C" 00:23:52.526 } 00:23:52.526 } 00:23:52.526 ] 00:23:52.526 }, 00:23:52.526 { 00:23:52.526 "subsystem": "iobuf", 00:23:52.526 "config": [ 00:23:52.526 { 00:23:52.526 "method": "iobuf_set_options", 00:23:52.526 "params": { 00:23:52.526 "small_pool_count": 8192, 00:23:52.526 "large_pool_count": 1024, 00:23:52.526 "small_bufsize": 8192, 00:23:52.526 "large_bufsize": 135168 00:23:52.526 } 00:23:52.526 } 00:23:52.526 ] 00:23:52.526 }, 00:23:52.526 { 00:23:52.526 "subsystem": "sock", 00:23:52.526 "config": [ 00:23:52.526 { 00:23:52.526 "method": "sock_set_default_impl", 00:23:52.526 "params": { 00:23:52.526 "impl_name": "posix" 00:23:52.526 } 00:23:52.526 }, 00:23:52.526 { 00:23:52.526 "method": "sock_impl_set_options", 00:23:52.526 "params": { 00:23:52.526 "impl_name": "ssl", 00:23:52.526 "recv_buf_size": 4096, 00:23:52.526 "send_buf_size": 4096, 00:23:52.526 "enable_recv_pipe": true, 00:23:52.526 "enable_quickack": false, 00:23:52.526 "enable_placement_id": 0, 00:23:52.526 "enable_zerocopy_send_server": true, 00:23:52.526 "enable_zerocopy_send_client": false, 00:23:52.526 "zerocopy_threshold": 0, 00:23:52.526 "tls_version": 0, 00:23:52.526 "enable_ktls": false 00:23:52.526 } 00:23:52.526 }, 00:23:52.526 { 00:23:52.526 "method": "sock_impl_set_options", 00:23:52.526 "params": { 00:23:52.526 "impl_name": "posix", 00:23:52.526 "recv_buf_size": 2097152, 00:23:52.526 "send_buf_size": 2097152, 00:23:52.526 "enable_recv_pipe": true, 00:23:52.526 "enable_quickack": false, 00:23:52.526 "enable_placement_id": 0, 00:23:52.526 "enable_zerocopy_send_server": true, 00:23:52.526 "enable_zerocopy_send_client": false, 00:23:52.526 "zerocopy_threshold": 0, 00:23:52.526 "tls_version": 0, 00:23:52.527 "enable_ktls": false 00:23:52.527 } 00:23:52.527 } 00:23:52.527 ] 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "subsystem": "vmd", 00:23:52.527 "config": [] 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "subsystem": "accel", 00:23:52.527 "config": [ 00:23:52.527 { 00:23:52.527 "method": "accel_set_options", 00:23:52.527 "params": { 00:23:52.527 "small_cache_size": 128, 00:23:52.527 "large_cache_size": 16, 00:23:52.527 "task_count": 2048, 00:23:52.527 "sequence_count": 2048, 00:23:52.527 "buf_count": 2048 00:23:52.527 } 00:23:52.527 } 00:23:52.527 ] 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "subsystem": "bdev", 00:23:52.527 "config": [ 
00:23:52.527 { 00:23:52.527 "method": "bdev_set_options", 00:23:52.527 "params": { 00:23:52.527 "bdev_io_pool_size": 65535, 00:23:52.527 "bdev_io_cache_size": 256, 00:23:52.527 "bdev_auto_examine": true, 00:23:52.527 "iobuf_small_cache_size": 128, 00:23:52.527 "iobuf_large_cache_size": 16 00:23:52.527 } 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "method": "bdev_raid_set_options", 00:23:52.527 "params": { 00:23:52.527 "process_window_size_kb": 1024 00:23:52.527 } 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "method": "bdev_iscsi_set_options", 00:23:52.527 "params": { 00:23:52.527 "timeout_sec": 30 00:23:52.527 } 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "method": "bdev_nvme_set_options", 00:23:52.527 "params": { 00:23:52.527 "action_on_timeout": "none", 00:23:52.527 "timeout_us": 0, 00:23:52.527 "timeout_admin_us": 0, 00:23:52.527 "keep_alive_timeout_ms": 10000, 00:23:52.527 "arbitration_burst": 0, 00:23:52.527 "low_priority_weight": 0, 00:23:52.527 "medium_priority_weight": 0, 00:23:52.527 "high_priority_weight": 0, 00:23:52.527 "nvme_adminq_poll_period_us": 10000, 00:23:52.527 "nvme_ioq_poll_period_us": 0, 00:23:52.527 "io_queue_requests": 512, 00:23:52.527 "delay_cmd_submit": true, 00:23:52.527 "transport_retry_count": 4, 00:23:52.527 "bdev_retry_count": 3, 00:23:52.527 "transport_ack_timeout": 0, 00:23:52.527 "ctrlr_loss_timeout_sec": 0, 00:23:52.527 "reconnect_delay_sec": 0, 00:23:52.527 "fast_io_fail_timeout_sec": 0, 00:23:52.527 "disable_auto_failback": false, 00:23:52.527 "generate_uuids": false, 00:23:52.527 "transport_tos": 0, 00:23:52.527 "nvme_error_stat": false, 00:23:52.527 "rdma_srq_size": 0, 00:23:52.527 "io_path_stat": false, 00:23:52.527 "allow_accel_sequence": false, 00:23:52.527 "rdma_max_cq_size": 0, 00:23:52.527 "rdma_cm_event_timeout_ms": 0, 00:23:52.527 "dhchap_digests": [ 00:23:52.527 "sha256", 00:23:52.527 "sha384", 00:23:52.527 "sha512" 00:23:52.527 ], 00:23:52.527 "dhchap_dhgroups": [ 00:23:52.527 "null", 00:23:52.527 "ffdhe2048", 00:23:52.527 "ffdhe3072", 00:23:52.527 "ffdhe4096", 00:23:52.527 "ffdhe6144", 00:23:52.527 "ffdhe8192" 00:23:52.527 ] 00:23:52.527 } 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "method": "bdev_nvme_attach_controller", 00:23:52.527 "params": { 00:23:52.527 "name": "nvme0", 00:23:52.527 "trtype": "TCP", 00:23:52.527 "adrfam": "IPv4", 00:23:52.527 "traddr": "10.0.0.2", 00:23:52.527 "trsvcid": "4420", 00:23:52.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.527 "prchk_reftag": false, 00:23:52.527 "prchk_guard": false, 00:23:52.527 "ctrlr_loss_timeout_sec": 0, 00:23:52.527 "reconnect_delay_sec": 0, 00:23:52.527 "fast_io_fail_timeout_sec": 0, 00:23:52.527 "psk": "key0", 00:23:52.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.527 "hdgst": false, 00:23:52.527 "ddgst": false 00:23:52.527 } 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "method": "bdev_nvme_set_hotplug", 00:23:52.527 "params": { 00:23:52.527 "period_us": 100000, 00:23:52.527 "enable": false 00:23:52.527 } 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "method": "bdev_enable_histogram", 00:23:52.527 "params": { 00:23:52.527 "name": "nvme0n1", 00:23:52.527 "enable": true 00:23:52.527 } 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "method": "bdev_wait_for_examine" 00:23:52.527 } 00:23:52.527 ] 00:23:52.527 }, 00:23:52.527 { 00:23:52.527 "subsystem": "nbd", 00:23:52.527 "config": [] 00:23:52.527 } 00:23:52.527 ] 00:23:52.527 }' 00:23:52.527 03:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3236505 00:23:52.527 03:26:58 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 3236505 ']' 00:23:52.527 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3236505 00:23:52.527 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:52.527 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.527 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3236505 00:23:52.527 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:52.527 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:52.527 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3236505' 00:23:52.527 killing process with pid 3236505 00:23:52.527 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3236505 00:23:52.527 Received shutdown signal, test time was about 1.000000 seconds 00:23:52.527 00:23:52.527 Latency(us) 00:23:52.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.527 =================================================================================================================== 00:23:52.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.527 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3236505 00:23:52.785 03:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3236477 00:23:52.785 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3236477 ']' 00:23:52.785 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3236477 00:23:52.785 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:52.785 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.785 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3236477 00:23:52.785 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:52.785 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:52.785 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3236477' 00:23:52.785 killing process with pid 3236477 00:23:52.785 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3236477 00:23:52.785 03:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3236477 00:23:53.043 03:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:53.043 03:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:53.043 03:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:53.043 "subsystems": [ 00:23:53.043 { 00:23:53.043 "subsystem": "keyring", 00:23:53.043 "config": [ 00:23:53.043 { 00:23:53.043 "method": "keyring_file_add_key", 00:23:53.043 "params": { 00:23:53.043 "name": "key0", 00:23:53.043 "path": "/tmp/tmp.wwkzYivz4C" 00:23:53.043 } 00:23:53.043 } 00:23:53.043 ] 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "subsystem": "iobuf", 00:23:53.043 "config": [ 00:23:53.043 { 00:23:53.043 "method": "iobuf_set_options", 00:23:53.043 "params": { 00:23:53.043 "small_pool_count": 8192, 00:23:53.043 "large_pool_count": 1024, 00:23:53.043 "small_bufsize": 8192, 00:23:53.043 "large_bufsize": 135168 00:23:53.043 } 00:23:53.043 } 00:23:53.043 ] 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "subsystem": "sock", 00:23:53.043 "config": [ 00:23:53.043 { 
00:23:53.043 "method": "sock_set_default_impl", 00:23:53.043 "params": { 00:23:53.043 "impl_name": "posix" 00:23:53.043 } 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "method": "sock_impl_set_options", 00:23:53.043 "params": { 00:23:53.043 "impl_name": "ssl", 00:23:53.043 "recv_buf_size": 4096, 00:23:53.043 "send_buf_size": 4096, 00:23:53.043 "enable_recv_pipe": true, 00:23:53.043 "enable_quickack": false, 00:23:53.043 "enable_placement_id": 0, 00:23:53.043 "enable_zerocopy_send_server": true, 00:23:53.043 "enable_zerocopy_send_client": false, 00:23:53.043 "zerocopy_threshold": 0, 00:23:53.043 "tls_version": 0, 00:23:53.043 "enable_ktls": false 00:23:53.043 } 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "method": "sock_impl_set_options", 00:23:53.043 "params": { 00:23:53.043 "impl_name": "posix", 00:23:53.043 "recv_buf_size": 2097152, 00:23:53.043 "send_buf_size": 2097152, 00:23:53.043 "enable_recv_pipe": true, 00:23:53.043 "enable_quickack": false, 00:23:53.043 "enable_placement_id": 0, 00:23:53.043 "enable_zerocopy_send_server": true, 00:23:53.043 "enable_zerocopy_send_client": false, 00:23:53.043 "zerocopy_threshold": 0, 00:23:53.043 "tls_version": 0, 00:23:53.043 "enable_ktls": false 00:23:53.043 } 00:23:53.043 } 00:23:53.043 ] 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "subsystem": "vmd", 00:23:53.043 "config": [] 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "subsystem": "accel", 00:23:53.043 "config": [ 00:23:53.043 { 00:23:53.043 "method": "accel_set_options", 00:23:53.043 "params": { 00:23:53.043 "small_cache_size": 128, 00:23:53.043 "large_cache_size": 16, 00:23:53.043 "task_count": 2048, 00:23:53.043 "sequence_count": 2048, 00:23:53.043 "buf_count": 2048 00:23:53.043 } 00:23:53.043 } 00:23:53.043 ] 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "subsystem": "bdev", 00:23:53.043 "config": [ 00:23:53.043 { 00:23:53.043 "method": "bdev_set_options", 00:23:53.043 "params": { 00:23:53.043 "bdev_io_pool_size": 65535, 00:23:53.043 "bdev_io_cache_size": 256, 00:23:53.043 "bdev_auto_examine": true, 00:23:53.043 "iobuf_small_cache_size": 128, 00:23:53.043 "iobuf_large_cache_size": 16 00:23:53.043 } 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "method": "bdev_raid_set_options", 00:23:53.043 "params": { 00:23:53.043 "process_window_size_kb": 1024 00:23:53.043 } 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "method": "bdev_iscsi_set_options", 00:23:53.043 "params": { 00:23:53.043 "timeout_sec": 30 00:23:53.043 } 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "method": "bdev_nvme_set_options", 00:23:53.043 "params": { 00:23:53.043 "action_on_timeout": "none", 00:23:53.043 "timeout_us": 0, 00:23:53.043 "timeout_admin_us": 0, 00:23:53.043 "keep_alive_timeout_ms": 10000, 00:23:53.043 "arbitration_burst": 0, 00:23:53.043 "low_priority_weight": 0, 00:23:53.043 "medium_priority_weight": 0, 00:23:53.043 "high_priority_weight": 0, 00:23:53.043 "nvme_adminq_poll_period_us": 10000, 00:23:53.043 "nvme_ioq_poll_period_us": 0, 00:23:53.043 "io_queue_requests": 0, 00:23:53.043 "delay_cmd_submit": true, 00:23:53.043 "transport_retry_count": 4, 00:23:53.043 "bdev_retry_count": 3, 00:23:53.043 "transport_ack_timeout": 0, 00:23:53.043 "ctrlr_loss_timeout_sec": 0, 00:23:53.043 "reconnect_delay_sec": 0, 00:23:53.043 "fast_io_fail_timeout_sec": 0, 00:23:53.043 "disable_auto_failback": false, 00:23:53.043 "generate_uuids": false, 00:23:53.043 "transport_tos": 0, 00:23:53.043 "nvme_error_stat": false, 00:23:53.043 "rdma_srq_size": 0, 00:23:53.043 "io_path_stat": false, 00:23:53.043 "allow_accel_sequence": false, 00:23:53.043 
"rdma_max_cq_size": 0, 00:23:53.043 "rdma_cm_event_timeout_ms": 0, 00:23:53.043 "dhchap_digests": [ 00:23:53.043 "sha256", 00:23:53.043 "sha384", 00:23:53.043 "sha512" 00:23:53.043 ], 00:23:53.043 "dhchap_dhgroups": [ 00:23:53.043 "null", 00:23:53.043 "ffdhe2048", 00:23:53.043 "ffdhe3072", 00:23:53.043 "ffdhe4096", 00:23:53.043 "ffdhe6144", 00:23:53.043 "ffdhe8192" 00:23:53.043 ] 00:23:53.043 } 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "method": "bdev_nvme_set_hotplug", 00:23:53.043 "params": { 00:23:53.043 "period_us": 100000, 00:23:53.043 "enable": false 00:23:53.043 } 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "method": "bdev_malloc_create", 00:23:53.043 "params": { 00:23:53.043 "name": "malloc0", 00:23:53.043 "num_blocks": 8192, 00:23:53.043 "block_size": 4096, 00:23:53.043 "physical_block_size": 4096, 00:23:53.043 "uuid": "52059ebf-a2a6-4731-93fa-d57c57fb3c9c", 00:23:53.043 "optimal_io_boundary": 0 00:23:53.043 } 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "method": "bdev_wait_for_examine" 00:23:53.043 } 00:23:53.043 ] 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "subsystem": "nbd", 00:23:53.043 "config": [] 00:23:53.043 }, 00:23:53.043 { 00:23:53.043 "subsystem": "scheduler", 00:23:53.043 "config": [ 00:23:53.044 { 00:23:53.044 "method": "framework_set_scheduler", 00:23:53.044 "params": { 00:23:53.044 "name": "static" 00:23:53.044 } 00:23:53.044 } 00:23:53.044 ] 00:23:53.044 }, 00:23:53.044 { 00:23:53.044 "subsystem": "nvmf", 00:23:53.044 "config": [ 00:23:53.044 { 00:23:53.044 "method": "nvmf_set_config", 00:23:53.044 "params": { 00:23:53.044 "discovery_filter": "match_any", 00:23:53.044 "admin_cmd_passthru": { 00:23:53.044 "identify_ctrlr": false 00:23:53.044 } 00:23:53.044 } 00:23:53.044 }, 00:23:53.044 { 00:23:53.044 "method": "nvmf_set_max_subsystems", 00:23:53.044 "params": { 00:23:53.044 "max_subsystems": 1024 00:23:53.044 } 00:23:53.044 }, 00:23:53.044 { 00:23:53.044 "method": "nvmf_set_crdt", 00:23:53.044 "params": { 00:23:53.044 "crdt1": 0, 00:23:53.044 "crdt2": 0, 00:23:53.044 "crdt3": 0 00:23:53.044 } 00:23:53.044 }, 00:23:53.044 { 00:23:53.044 "method": "nvmf_create_transport", 00:23:53.044 "params": { 00:23:53.044 "trtype": "TCP", 00:23:53.044 "max_queue_depth": 128, 00:23:53.044 "max_io_qpairs_per_ctrlr": 127, 00:23:53.044 "in_capsule_data_size": 4096, 00:23:53.044 "max_io_size": 131072, 00:23:53.044 "io_unit_size": 131072, 00:23:53.044 "max_aq_depth": 128, 00:23:53.044 "num_shared_buffers": 511, 00:23:53.044 "buf_cache_size": 4294967295, 00:23:53.044 "dif_insert_or_strip": false, 00:23:53.044 "zcopy": false, 00:23:53.044 "c2h_success": false, 00:23:53.044 "sock_priority": 0, 00:23:53.044 "abort_timeout_sec": 1, 00:23:53.044 "ack_timeout": 0, 00:23:53.044 "data_wr_pool_size": 0 00:23:53.044 } 00:23:53.044 }, 00:23:53.044 { 00:23:53.044 "method": "nvmf_create_subsystem", 00:23:53.044 "params": { 00:23:53.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.044 "allow_any_host": false, 00:23:53.044 "serial_number": "00000000000000000000", 00:23:53.044 "model_number": "SPDK bdev Controller", 00:23:53.044 "max_namespaces": 32, 00:23:53.044 "min_cntlid": 1, 00:23:53.044 "max_cntlid": 65519, 00:23:53.044 "ana_reporting": false 00:23:53.044 } 00:23:53.044 }, 00:23:53.044 { 00:23:53.044 "method": "nvmf_subsystem_add_host", 00:23:53.044 "params": { 00:23:53.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.044 "host": "nqn.2016-06.io.spdk:host1", 00:23:53.044 "psk": "key0" 00:23:53.044 } 00:23:53.044 }, 00:23:53.044 { 00:23:53.044 "method": "nvmf_subsystem_add_ns", 00:23:53.044 
"params": { 00:23:53.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.044 "namespace": { 00:23:53.044 "nsid": 1, 00:23:53.044 "bdev_name": "malloc0", 00:23:53.044 "nguid": "52059EBFA2A6473193FAD57C57FB3C9C", 00:23:53.044 "uuid": "52059ebf-a2a6-4731-93fa-d57c57fb3c9c", 00:23:53.044 "no_auto_visible": false 00:23:53.044 } 00:23:53.044 } 00:23:53.044 }, 00:23:53.044 { 00:23:53.044 "method": "nvmf_subsystem_add_listener", 00:23:53.044 "params": { 00:23:53.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.044 "listen_address": { 00:23:53.044 "trtype": "TCP", 00:23:53.044 "adrfam": "IPv4", 00:23:53.044 "traddr": "10.0.0.2", 00:23:53.044 "trsvcid": "4420" 00:23:53.044 }, 00:23:53.044 "secure_channel": true 00:23:53.044 } 00:23:53.044 } 00:23:53.044 ] 00:23:53.044 } 00:23:53.044 ] 00:23:53.044 }' 00:23:53.044 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:53.044 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.044 03:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3236907 00:23:53.044 03:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:53.044 03:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3236907 00:23:53.044 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3236907 ']' 00:23:53.044 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.044 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.044 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.044 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.044 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.044 [2024-07-15 03:26:59.068276] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:53.044 [2024-07-15 03:26:59.068374] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.044 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.044 [2024-07-15 03:26:59.137117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.302 [2024-07-15 03:26:59.224510] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.302 [2024-07-15 03:26:59.224575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.302 [2024-07-15 03:26:59.224601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.302 [2024-07-15 03:26:59.224615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.302 [2024-07-15 03:26:59.224627] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.302 [2024-07-15 03:26:59.224723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.560 [2024-07-15 03:26:59.466855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.560 [2024-07-15 03:26:59.498868] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.560 [2024-07-15 03:26:59.510048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.126 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.126 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:54.126 03:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.126 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:54.126 03:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.126 03:27:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.126 03:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3237066 00:23:54.126 03:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3237066 /var/tmp/bdevperf.sock 00:23:54.126 03:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3237066 ']' 00:23:54.126 03:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.126 03:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.126 03:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:54.126 03:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
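The initiator is bootstrapped the same way: the JSON echoed next is the bperfcfg that save_config pulled from the previous bdevperf instance, handed to this one as -c /dev/fd/63. Loading it at startup replays the keyring_file_add_key and the TLS bdev_nvme_attach_controller from config, which is why the trace further below goes straight to bdev_nvme_get_controllers to confirm nvme0 exists instead of attaching it over the RPC socket. A sketch under the same process-substitution assumption:

# Start bdevperf with the saved config on fd 63; -z makes it idle after setup
# and wait for perform_tests to arrive over /var/tmp/bdevperf.sock.
build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &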
00:23:54.126 03:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:54.126 "subsystems": [ 00:23:54.126 { 00:23:54.126 "subsystem": "keyring", 00:23:54.126 "config": [ 00:23:54.126 { 00:23:54.126 "method": "keyring_file_add_key", 00:23:54.126 "params": { 00:23:54.126 "name": "key0", 00:23:54.126 "path": "/tmp/tmp.wwkzYivz4C" 00:23:54.126 } 00:23:54.126 } 00:23:54.126 ] 00:23:54.126 }, 00:23:54.126 { 00:23:54.126 "subsystem": "iobuf", 00:23:54.126 "config": [ 00:23:54.126 { 00:23:54.126 "method": "iobuf_set_options", 00:23:54.126 "params": { 00:23:54.126 "small_pool_count": 8192, 00:23:54.126 "large_pool_count": 1024, 00:23:54.126 "small_bufsize": 8192, 00:23:54.126 "large_bufsize": 135168 00:23:54.126 } 00:23:54.126 } 00:23:54.126 ] 00:23:54.126 }, 00:23:54.126 { 00:23:54.126 "subsystem": "sock", 00:23:54.126 "config": [ 00:23:54.126 { 00:23:54.126 "method": "sock_set_default_impl", 00:23:54.126 "params": { 00:23:54.126 "impl_name": "posix" 00:23:54.126 } 00:23:54.126 }, 00:23:54.126 { 00:23:54.126 "method": "sock_impl_set_options", 00:23:54.126 "params": { 00:23:54.126 "impl_name": "ssl", 00:23:54.126 "recv_buf_size": 4096, 00:23:54.126 "send_buf_size": 4096, 00:23:54.126 "enable_recv_pipe": true, 00:23:54.126 "enable_quickack": false, 00:23:54.126 "enable_placement_id": 0, 00:23:54.126 "enable_zerocopy_send_server": true, 00:23:54.126 "enable_zerocopy_send_client": false, 00:23:54.126 "zerocopy_threshold": 0, 00:23:54.126 "tls_version": 0, 00:23:54.126 "enable_ktls": false 00:23:54.126 } 00:23:54.126 }, 00:23:54.126 { 00:23:54.126 "method": "sock_impl_set_options", 00:23:54.126 "params": { 00:23:54.126 "impl_name": "posix", 00:23:54.126 "recv_buf_size": 2097152, 00:23:54.126 "send_buf_size": 2097152, 00:23:54.126 "enable_recv_pipe": true, 00:23:54.126 "enable_quickack": false, 00:23:54.126 "enable_placement_id": 0, 00:23:54.126 "enable_zerocopy_send_server": true, 00:23:54.126 "enable_zerocopy_send_client": false, 00:23:54.126 "zerocopy_threshold": 0, 00:23:54.126 "tls_version": 0, 00:23:54.126 "enable_ktls": false 00:23:54.126 } 00:23:54.126 } 00:23:54.126 ] 00:23:54.126 }, 00:23:54.126 { 00:23:54.126 "subsystem": "vmd", 00:23:54.126 "config": [] 00:23:54.126 }, 00:23:54.126 { 00:23:54.126 "subsystem": "accel", 00:23:54.126 "config": [ 00:23:54.126 { 00:23:54.126 "method": "accel_set_options", 00:23:54.126 "params": { 00:23:54.127 "small_cache_size": 128, 00:23:54.127 "large_cache_size": 16, 00:23:54.127 "task_count": 2048, 00:23:54.127 "sequence_count": 2048, 00:23:54.127 "buf_count": 2048 00:23:54.127 } 00:23:54.127 } 00:23:54.127 ] 00:23:54.127 }, 00:23:54.127 { 00:23:54.127 "subsystem": "bdev", 00:23:54.127 "config": [ 00:23:54.127 { 00:23:54.127 "method": "bdev_set_options", 00:23:54.127 "params": { 00:23:54.127 "bdev_io_pool_size": 65535, 00:23:54.127 "bdev_io_cache_size": 256, 00:23:54.127 "bdev_auto_examine": true, 00:23:54.127 "iobuf_small_cache_size": 128, 00:23:54.127 "iobuf_large_cache_size": 16 00:23:54.127 } 00:23:54.127 }, 00:23:54.127 { 00:23:54.127 "method": "bdev_raid_set_options", 00:23:54.127 "params": { 00:23:54.127 "process_window_size_kb": 1024 00:23:54.127 } 00:23:54.127 }, 00:23:54.127 { 00:23:54.127 "method": "bdev_iscsi_set_options", 00:23:54.127 "params": { 00:23:54.127 "timeout_sec": 30 00:23:54.127 } 00:23:54.127 }, 00:23:54.127 { 00:23:54.127 "method": "bdev_nvme_set_options", 00:23:54.127 "params": { 00:23:54.127 "action_on_timeout": "none", 00:23:54.127 "timeout_us": 0, 00:23:54.127 "timeout_admin_us": 0, 00:23:54.127 "keep_alive_timeout_ms": 
10000, 00:23:54.127 "arbitration_burst": 0, 00:23:54.127 "low_priority_weight": 0, 00:23:54.127 "medium_priority_weight": 0, 00:23:54.127 "high_priority_weight": 0, 00:23:54.127 "nvme_adminq_poll_period_us": 10000, 00:23:54.127 "nvme_ioq_poll_period_us": 0, 00:23:54.127 "io_queue_requests": 512, 00:23:54.127 "delay_cmd_submit": true, 00:23:54.127 "transport_retry_count": 4, 00:23:54.127 "bdev_retry_count": 3, 00:23:54.127 "transport_ack_timeout": 0, 00:23:54.127 "ctrlr_loss_timeout_sec": 0, 00:23:54.127 "reconnect_delay_sec": 0, 00:23:54.127 "fast_io_fail_timeout_sec": 0, 00:23:54.127 "disable_auto_failback": false, 00:23:54.127 "generate_uuids": false, 00:23:54.127 "transport_tos": 0, 00:23:54.127 "nvme_error_stat": false, 00:23:54.127 "rdma_srq_size": 0, 00:23:54.127 "io_path_stat": false, 00:23:54.127 "allow_accel_sequence": false, 00:23:54.127 "rdma_max_cq_size": 0, 00:23:54.127 "rdma_cm_event_timeout_ms": 0, 00:23:54.127 "dhchap_digests": [ 00:23:54.127 "sha256", 00:23:54.127 "sha384", 00:23:54.127 "sha512" 00:23:54.127 ], 00:23:54.127 "dhchap_dhgroups": [ 00:23:54.127 "null", 00:23:54.127 "ffdhe2048", 00:23:54.127 "ffdhe3072", 00:23:54.127 "ffdhe4096", 00:23:54.127 "ffdhe6144", 00:23:54.127 "ffdhe8192" 00:23:54.127 ] 00:23:54.127 } 00:23:54.127 }, 00:23:54.127 { 00:23:54.127 "method": "bdev_nvme_attach_controller", 00:23:54.127 "params": { 00:23:54.127 "name": "nvme0", 00:23:54.127 "trtype": "TCP", 00:23:54.127 "adrfam": "IPv4", 00:23:54.127 "traddr": "10.0.0.2", 00:23:54.127 "trsvcid": "4420", 00:23:54.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.127 "prchk_reftag": false, 00:23:54.127 "prchk_guard": false, 00:23:54.127 "ctrlr_loss_timeout_sec": 0, 00:23:54.127 "reconnect_delay_sec": 0, 00:23:54.127 "fast_io_fail_timeout_sec": 0, 00:23:54.127 "psk": "key0", 00:23:54.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:54.127 "hdgst": false, 00:23:54.127 "ddgst": false 00:23:54.127 } 00:23:54.127 }, 00:23:54.127 { 00:23:54.127 "method": "bdev_nvme_set_hotplug", 00:23:54.127 "params": { 00:23:54.127 "period_us": 100000, 00:23:54.127 "enable": false 00:23:54.127 } 00:23:54.127 }, 00:23:54.127 { 00:23:54.127 "method": "bdev_enable_histogram", 00:23:54.127 "params": { 00:23:54.127 "name": "nvme0n1", 00:23:54.127 "enable": true 00:23:54.127 } 00:23:54.127 }, 00:23:54.127 { 00:23:54.127 "method": "bdev_wait_for_examine" 00:23:54.127 } 00:23:54.127 ] 00:23:54.127 }, 00:23:54.127 { 00:23:54.127 "subsystem": "nbd", 00:23:54.127 "config": [] 00:23:54.127 } 00:23:54.127 ] 00:23:54.127 }' 00:23:54.127 03:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.127 03:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.127 [2024-07-15 03:27:00.057651] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:54.127 [2024-07-15 03:27:00.057738] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3237066 ] 00:23:54.127 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.127 [2024-07-15 03:27:00.121347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.127 [2024-07-15 03:27:00.214370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.385 [2024-07-15 03:27:00.392642] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.949 03:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.949 03:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:54.949 03:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:54.949 03:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:55.207 03:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.207 03:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:55.465 Running I/O for 1 seconds... 00:23:56.397 00:23:56.397 Latency(us) 00:23:56.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.397 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:56.397 Verification LBA range: start 0x0 length 0x2000 00:23:56.397 nvme0n1 : 1.03 3550.16 13.87 0.00 0.00 35547.22 6140.97 36700.16 00:23:56.397 =================================================================================================================== 00:23:56.397 Total : 3550.16 13.87 0.00 0.00 35547.22 6140.97 36700.16 00:23:56.397 0 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:56.397 nvmf_trace.0 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3237066 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3237066 ']' 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 3237066 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3237066 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3237066' 00:23:56.397 killing process with pid 3237066 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3237066 00:23:56.397 Received shutdown signal, test time was about 1.000000 seconds 00:23:56.397 00:23:56.397 Latency(us) 00:23:56.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.397 =================================================================================================================== 00:23:56.397 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.397 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3237066 00:23:56.654 03:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:56.654 03:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:56.654 03:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:56.654 03:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:56.654 03:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:56.654 03:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:56.654 03:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:56.654 rmmod nvme_tcp 00:23:56.654 rmmod nvme_fabrics 00:23:56.912 rmmod nvme_keyring 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3236907 ']' 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3236907 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3236907 ']' 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3236907 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3236907 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3236907' 00:23:56.912 killing process with pid 3236907 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3236907 00:23:56.912 03:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3236907 00:23:57.171 03:27:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:57.171 03:27:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:57.171 03:27:03 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:57.171 03:27:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.171 03:27:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.171 03:27:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.171 03:27:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.171 03:27:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.071 03:27:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:59.071 03:27:05 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.HRL7eI6DWQ /tmp/tmp.fk8xo6UJyn /tmp/tmp.wwkzYivz4C 00:23:59.071 00:23:59.071 real 1m18.891s 00:23:59.071 user 2m9.665s 00:23:59.071 sys 0m24.362s 00:23:59.071 03:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:59.071 03:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.071 ************************************ 00:23:59.071 END TEST nvmf_tls 00:23:59.071 ************************************ 00:23:59.071 03:27:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:59.071 03:27:05 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:59.071 03:27:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:59.071 03:27:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.071 03:27:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:59.071 ************************************ 00:23:59.071 START TEST nvmf_fips 00:23:59.071 ************************************ 00:23:59.071 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:59.330 * Looking for test storage... 
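The fips.sh preamble traced below first gates the test on the OpenSSL version: check_openssl_version pulls the version string out of "openssl version" and cmp_versions (scripts/common.sh) compares it against the 3.0.0 floor one dot-separated component at a time, which is why the trace steps through decimal 3 vs 3, 0 vs 0, then 9 vs 0. A standalone sketch of that comparison, assuming purely numeric components:

    # Sketch of the component-wise version gate traced below (numeric parts only).
    version_ge() {                            # usage: version_ge 3.0.9 3.0.0
        local IFS=.-: i
        local -a v1=($1) v2=($2)              # split both versions on . - :
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0   # strictly greater: pass
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1   # strictly less: fail
        done
        return 0                              # equal versions satisfy >=
    }
    version_ge "$(openssl version | awk '{print $2}')" 3.0.0 || exit 1
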
00:23:59.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.330 03:27:05 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:59.330 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:59.331 Error setting digest 00:23:59.331 00524E7A007F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:59.331 00524E7A007F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.331 03:27:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.228 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.229 
03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:01.229 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:01.229 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:01.229 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:01.229 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.229 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:24:01.486 00:24:01.486 --- 10.0.0.2 ping statistics --- 00:24:01.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.486 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:01.486 00:24:01.486 --- 10.0.0.1 ping statistics --- 00:24:01.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.486 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3239560 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3239560 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3239560 ']' 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.486 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:01.486 [2024-07-15 03:27:07.483575] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:01.486 [2024-07-15 03:27:07.483671] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.486 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.486 [2024-07-15 03:27:07.550664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.744 [2024-07-15 03:27:07.650524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.744 [2024-07-15 03:27:07.650572] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:01.744 [2024-07-15 03:27:07.650592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.744 [2024-07-15 03:27:07.650610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.744 [2024-07-15 03:27:07.650620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.744 [2024-07-15 03:27:07.650646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.744 03:27:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:02.003 [2024-07-15 03:27:08.027023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.003 [2024-07-15 03:27:08.042975] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.003 [2024-07-15 03:27:08.043224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.003 [2024-07-15 03:27:08.075466] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:02.003 malloc0 00:24:02.003 03:27:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:02.003 03:27:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3239888 00:24:02.003 03:27:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:02.003 03:27:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3239888 /var/tmp/bdevperf.sock 00:24:02.003 03:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3239888 ']' 00:24:02.003 03:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.003 03:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:24:02.003 03:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.003 03:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.003 03:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:02.291 [2024-07-15 03:27:08.172716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:02.291 [2024-07-15 03:27:08.172817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3239888 ] 00:24:02.291 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.291 [2024-07-15 03:27:08.231403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.291 [2024-07-15 03:27:08.315514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.549 03:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.549 03:27:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:02.549 03:27:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:02.549 [2024-07-15 03:27:08.660993] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.549 [2024-07-15 03:27:08.661114] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:02.813 TLSTESTn1 00:24:02.813 03:27:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.813 Running I/O for 10 seconds... 
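Before the ten-second I/O run whose results follow, setup_nvmf_tgt_conf brought the target side up: the TCP transport was initialized with '-t tcp -o', a malloc0 bdev was exposed through nqn.2016-06.io.spdk:cnode1, a TLS listener was opened on 10.0.0.2 port 4420, and host1 was admitted with a PSK file path (the source of the nvmf_tcp_psk_path deprecation warning above). The log records only the side effects of that sequence, so the following reconstruction is hedged: the rpc.py verbs exist, but the exact argument forms are assumptions against this SPDK revision.

    # Hypothetical reconstruction of the target-side setup behind the notices above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o                  # "*** TCP Transport Init ***"
    $RPC bdev_malloc_create -b malloc0 32 4096            # the malloc0 printed above
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # PSK-by-path is what triggers the nvmf_tcp_psk_path deprecation warning:
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
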
00:24:12.775 00:24:12.775 Latency(us) 00:24:12.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.775 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:12.775 Verification LBA range: start 0x0 length 0x2000 00:24:12.775 TLSTESTn1 : 10.04 3018.08 11.79 0.00 0.00 42302.92 9223.59 56312.41 00:24:12.775 =================================================================================================================== 00:24:12.775 Total : 3018.08 11.79 0.00 0.00 42302.92 9223.59 56312.41 00:24:12.775 0 00:24:13.034 03:27:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:13.034 03:27:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:13.034 03:27:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:13.034 03:27:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:13.034 03:27:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:13.034 03:27:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:13.034 03:27:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:13.034 03:27:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:13.034 03:27:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:13.034 03:27:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:13.034 nvmf_trace.0 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3239888 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3239888 ']' 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3239888 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3239888 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3239888' 00:24:13.034 killing process with pid 3239888 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3239888 00:24:13.034 Received shutdown signal, test time was about 10.000000 seconds 00:24:13.034 00:24:13.034 Latency(us) 00:24:13.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.034 =================================================================================================================== 00:24:13.034 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.034 [2024-07-15 03:27:19.034940] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:13.034 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3239888 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:13.292 rmmod nvme_tcp 00:24:13.292 rmmod nvme_fabrics 00:24:13.292 rmmod nvme_keyring 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3239560 ']' 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3239560 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3239560 ']' 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3239560 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3239560 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3239560' 00:24:13.292 killing process with pid 3239560 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3239560 00:24:13.292 [2024-07-15 03:27:19.345126] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:13.292 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3239560 00:24:13.549 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:13.549 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:13.549 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:13.550 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:13.550 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:13.550 03:27:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.550 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.550 03:27:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:16.081 00:24:16.081 real 0m16.460s 00:24:16.081 user 0m20.559s 00:24:16.081 sys 0m5.950s 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.081 ************************************ 00:24:16.081 END TEST nvmf_fips 
00:24:16.081 ************************************ 00:24:16.081 03:27:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:16.081 03:27:21 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:16.081 03:27:21 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:16.081 03:27:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:16.081 03:27:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.081 03:27:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:16.081 ************************************ 00:24:16.081 START TEST nvmf_fuzz 00:24:16.081 ************************************ 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:16.081 * Looking for test storage... 00:24:16.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:16.081 03:27:21 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:16.081 03:27:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:17.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:17.981 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:17.981 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:17.981 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.981 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:17.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:24:17.982 00:24:17.982 --- 10.0.0.2 ping statistics --- 00:24:17.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.982 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:24:17.982 00:24:17.982 --- 10.0.0.1 ping statistics --- 00:24:17.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.982 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3243306 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3243306 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3243306 ']' 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
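nvmf_tcp_init, traced just above for the fuzz test (and identically earlier for fips), carves the two ice ports into a point-to-point test network: cvl_0_0 moves into a fresh namespace as the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, port 4420 is opened in the firewall, and both directions are verified with a single ping before the target starts. Condensed from the commands at common.sh@244-268:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from clean interfaces
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator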
00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.982 03:27:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:18.240 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.240 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:18.240 03:27:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:18.240 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.240 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:18.240 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.240 03:27:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:18.240 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.240 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:18.240 Malloc0 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:18.241 03:27:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:50.300 Fuzzing completed. 
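fabrics_fuzz.sh drives two nvme_fuzz passes against the same one-namespace subsystem (Malloc0 on cnode1, listener at 10.0.0.2:4420). The first, time-bounded pass has just reported completion above; its opcode tallies and the second, JSON-driven pass follow below. Because the first pass runs from a fixed seed, its random command stream is reproducible across runs. The two invocations, with paths shortened to the spdk tree and the -N and -a switches carried over verbatim from the trace:

trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
# Pass 1 (fabrics_fuzz.sh@30): 30 s of commands from fixed seed 123456
nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
# Pass 2 (fabrics_fuzz.sh@32): replay the curated cases in example.json
nvme_fuzz -m 0x2 -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a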
Shutting down the fuzz application 00:24:50.300 00:24:50.300 Dumping successful admin opcodes: 00:24:50.300 8, 9, 10, 24, 00:24:50.300 Dumping successful io opcodes: 00:24:50.300 0, 9, 00:24:50.300 NS: 0x200003aeff00 I/O qp, Total commands completed: 481255, total successful commands: 2780, random_seed: 346359296 00:24:50.300 NS: 0x200003aeff00 admin qp, Total commands completed: 58208, total successful commands: 463, random_seed: 3395706368 00:24:50.300 03:27:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:50.300 Fuzzing completed. Shutting down the fuzz application 00:24:50.300 00:24:50.300 Dumping successful admin opcodes: 00:24:50.300 24, 00:24:50.300 Dumping successful io opcodes: 00:24:50.300 00:24:50.300 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 684504961 00:24:50.300 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 684629697 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.300 rmmod nvme_tcp 00:24:50.300 rmmod nvme_fabrics 00:24:50.300 rmmod nvme_keyring 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3243306 ']' 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 3243306 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3243306 ']' 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 3243306 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3243306 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:50.300 
03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3243306' 00:24:50.300 killing process with pid 3243306 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 3243306 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 3243306 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.300 03:27:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.832 03:27:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.832 03:27:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:52.832 00:24:52.832 real 0m36.789s 00:24:52.832 user 0m50.941s 00:24:52.832 sys 0m14.976s 00:24:52.832 03:27:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:52.832 03:27:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:52.832 ************************************ 00:24:52.832 END TEST nvmf_fuzz 00:24:52.832 ************************************ 00:24:52.832 03:27:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:52.832 03:27:58 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:52.832 03:27:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:52.832 03:27:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:52.832 03:27:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:52.832 ************************************ 00:24:52.832 START TEST nvmf_multiconnection 00:24:52.832 ************************************ 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:52.832 * Looking for test storage... 
00:24:52.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.832 03:27:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.833 03:27:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.764 03:28:00 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:54.764 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:54.765 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:54.765 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:54.765 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:54.765 03:28:00 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:54.765 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
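gather_supported_nvmf_pci_devs, traced at the start of both this test and the fuzz test, classifies NICs purely by PCI vendor:device ID: Intel (0x8086) parts 0x1592/0x159b go in the e810 list and 0x37d2 in x722, while eight Mellanox (0x15b3) IDs fill mlx. On this rig both ports report 0x8086:0x159b bound to ice, so e810 becomes the device set, and only interfaces reporting operstate up survive (the [[ up == up ]] checks). The netdev names then come from a sysfs walk, essentially what common.sh@382-401 traces:

# Sketch of the netdev lookup behind the 'Found net devices under ...' lines
for pci in 0000:0a:00.0 0000:0a:00.1; do             # both matched intel:0x159b (e810/ice)
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")          # basename only: cvl_0_0, cvl_0_1
    net_devs+=("${pci_net_devs[@]}")
done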
00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:54.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:24:54.765 00:24:54.765 --- 10.0.0.2 ping statistics --- 00:24:54.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.765 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:54.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:24:54.765 00:24:54.765 --- 10.0.0.1 ping statistics --- 00:24:54.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.765 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3248905 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3248905 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 3248905 ']' 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
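nvmfappstart launches the same nvmf_tgt binary inside the same namespace as the fuzz test, but with a different reactor mask: fabrics_fuzz ran one core (-m 0x1) while multiconnection asks for four (-m 0xF), which is why four "Reactor started on core N" notices appear just below; -e 0xFFFF enables all tracepoint groups, as the "Tracepoint Group Mask 0xFFFF specified" notice confirms. waitforlisten then blocks (max_retries=100 in the trace) until the new pid answers on /var/tmp/spdk.sock. The two launches side by side, paths shortened to the spdk tree:

# fabrics_fuzz.sh@13: single reactor for the fuzz target
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
# common.sh@480 for multiconnection: four reactors
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF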
00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.765 03:28:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.765 [2024-07-15 03:28:00.766389] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:54.765 [2024-07-15 03:28:00.766469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.765 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.765 [2024-07-15 03:28:00.832291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.022 [2024-07-15 03:28:00.927274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.022 [2024-07-15 03:28:00.927327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.022 [2024-07-15 03:28:00.927341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.022 [2024-07-15 03:28:00.927353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.022 [2024-07-15 03:28:00.927364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.022 [2024-07-15 03:28:00.927484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.022 [2024-07-15 03:28:00.927545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.022 [2024-07-15 03:28:00.927596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.022 [2024-07-15 03:28:00.927599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 [2024-07-15 03:28:01.068519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.022 
03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 Malloc1 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 [2024-07-15 03:28:01.123596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 Malloc2 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.281 03:28:01 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.281 Malloc3 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.281 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 Malloc4 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 Malloc5 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 Malloc6 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 Malloc7 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.282 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 Malloc8 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 Malloc9 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 Malloc10 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 Malloc11 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.541 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
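Reconstructed from the multiconnection.sh@21-@25 tags above (with the loop variable restored — the trace shows only the expanded values Malloc3..Malloc11, cnode3..cnode11), the setup phase is this per-subsystem loop:

  # For each of the $NVMF_SUBSYS (11) subsystems: create a 64 MiB malloc bdev with
  # 512-byte blocks, create the subsystem (-a: allow any host, -s: serial number),
  # attach the bdev as a namespace, and add a TCP listener. 10.0.0.2 and 4420 are
  # the expanded values of the target address/port as they appear in the trace.
  for i in $(seq 1 $NVMF_SUBSYS); do                                            # @21
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                           # @22
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i" # @23
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"    # @24
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420                                            # @25
  done
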
00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.542 03:28:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:56.108 03:28:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:56.108 03:28:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:56.108 03:28:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:56.108 03:28:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:56.108 03:28:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:58.638 03:28:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:58.638 03:28:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:58.638 03:28:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:58.638 03:28:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:58.638 03:28:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:58.638 03:28:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:58.638 03:28:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.638 03:28:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:58.897 03:28:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:58.897 03:28:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:58.897 03:28:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:58.897 03:28:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:58.897 03:28:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:00.799 03:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:00.799 03:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:00.799 03:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:00.799 03:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:00.799 03:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:00.799 
03:28:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:00.799 03:28:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.799 03:28:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:01.736 03:28:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:01.736 03:28:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:01.736 03:28:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.736 03:28:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:01.736 03:28:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:03.640 03:28:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:03.640 03:28:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:03.640 03:28:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:03.640 03:28:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:03.640 03:28:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.640 03:28:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:03.640 03:28:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.640 03:28:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:04.207 03:28:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:04.207 03:28:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.207 03:28:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.207 03:28:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:04.207 03:28:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:06.740 03:28:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:06.741 03:28:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:06.741 03:28:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:06.741 03:28:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:06.741 03:28:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.741 03:28:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:06.741 03:28:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.741 03:28:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:07.000 03:28:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:07.000 03:28:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.000 03:28:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.000 03:28:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:07.000 03:28:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:09.532 03:28:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:09.532 03:28:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:09.532 03:28:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:09.532 03:28:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:09.532 03:28:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.532 03:28:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:09.532 03:28:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.532 03:28:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:10.101 03:28:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:10.101 03:28:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:10.101 03:28:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:10.101 03:28:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:10.101 03:28:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:12.006 03:28:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:12.006 03:28:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:12.006 03:28:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:12.006 03:28:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:12.006 03:28:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.006 03:28:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:12.006 03:28:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.006 03:28:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:12.943 03:28:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:12.943 03:28:18 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:12.943 03:28:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.943 03:28:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:12.943 03:28:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:14.847 03:28:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:14.847 03:28:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:14.847 03:28:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:14.847 03:28:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:14.847 03:28:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.847 03:28:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:14.847 03:28:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.847 03:28:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:15.812 03:28:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:15.812 03:28:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:15.812 03:28:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.812 03:28:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:15.812 03:28:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:17.722 03:28:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:17.722 03:28:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:17.722 03:28:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:17.722 03:28:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:17.722 03:28:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.722 03:28:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:17.722 03:28:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.722 03:28:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:18.656 03:28:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:18.656 03:28:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:18.656 03:28:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.656 03:28:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
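The connect phase repeating through here follows one pattern per subsystem: multiconnection.sh@29 issues the nvme connect, then @30 calls waitforserial, whose internals are the autotest_common.sh@1198-@1208 lines in the trace (sleep 2, then poll lsblk for a block device carrying the expected serial). Reconstructed as a sketch — the loop plumbing and failure path are inferred, since the trace only ever shows the first, successful poll:

  # Poll until a device with serial $1 shows up; $2 optionally overrides the
  # expected device count (empty in every call traced above).
  waitforserial() {
      local i=0                                                               # @1198
      local nvme_device_counter=1 nvme_devices=0                              # @1199
      [[ -n $2 ]] && nvme_device_counter=$2                                   # @1200
      sleep 2                                 # @1205: let the connect settle
      while ((i++ <= 15)); do                 # @1206: up to 16 polls
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")              # @1207
          ((nvme_devices == nvme_device_counter)) && return 0                 # @1208
          sleep 2                             # retry delay assumed; not in the trace
      done
      return 1                                # failure path assumed
  }

  # Connect loop, with the host NQN/ID exactly as expanded in the log above.
  for i in $(seq 1 $NVMF_SUBSYS); do                                          # @28
      nvme connect \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
          --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
          -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420         # @29
      waitforserial "SPDK$i"                                                  # @30
  done
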
00:25:18.656 03:28:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:20.561 03:28:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:20.561 03:28:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:20.561 03:28:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:20.562 03:28:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:20.562 03:28:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:20.562 03:28:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:20.562 03:28:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.562 03:28:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:21.130 03:28:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:21.130 03:28:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.130 03:28:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.130 03:28:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:21.130 03:28:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:23.058 03:28:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:23.058 03:28:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:23.058 03:28:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:23.058 03:28:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:23.058 03:28:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.058 03:28:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:23.058 03:28:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.058 03:28:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:24.431 03:28:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:24.431 03:28:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:24.431 03:28:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.431 03:28:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:24.431 03:28:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:26.336 03:28:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:26.336 03:28:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:25:26.336 03:28:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:26.336 03:28:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:26.336 03:28:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.336 03:28:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:26.336 03:28:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:26.336 [global] 00:25:26.336 thread=1 00:25:26.336 invalidate=1 00:25:26.336 rw=read 00:25:26.336 time_based=1 00:25:26.336 runtime=10 00:25:26.336 ioengine=libaio 00:25:26.336 direct=1 00:25:26.336 bs=262144 00:25:26.336 iodepth=64 00:25:26.336 norandommap=1 00:25:26.336 numjobs=1 00:25:26.336 00:25:26.336 [job0] 00:25:26.336 filename=/dev/nvme0n1 00:25:26.336 [job1] 00:25:26.336 filename=/dev/nvme10n1 00:25:26.336 [job2] 00:25:26.336 filename=/dev/nvme1n1 00:25:26.336 [job3] 00:25:26.336 filename=/dev/nvme2n1 00:25:26.336 [job4] 00:25:26.336 filename=/dev/nvme3n1 00:25:26.336 [job5] 00:25:26.336 filename=/dev/nvme4n1 00:25:26.336 [job6] 00:25:26.336 filename=/dev/nvme5n1 00:25:26.336 [job7] 00:25:26.336 filename=/dev/nvme6n1 00:25:26.336 [job8] 00:25:26.336 filename=/dev/nvme7n1 00:25:26.336 [job9] 00:25:26.336 filename=/dev/nvme8n1 00:25:26.336 [job10] 00:25:26.336 filename=/dev/nvme9n1 00:25:26.336 Could not set queue depth (nvme0n1) 00:25:26.336 Could not set queue depth (nvme10n1) 00:25:26.336 Could not set queue depth (nvme1n1) 00:25:26.336 Could not set queue depth (nvme2n1) 00:25:26.336 Could not set queue depth (nvme3n1) 00:25:26.336 Could not set queue depth (nvme4n1) 00:25:26.336 Could not set queue depth (nvme5n1) 00:25:26.336 Could not set queue depth (nvme6n1) 00:25:26.336 Could not set queue depth (nvme7n1) 00:25:26.336 Could not set queue depth (nvme8n1) 00:25:26.336 Could not set queue depth (nvme9n1) 00:25:26.336 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.336 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.336 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.336 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.336 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.336 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.336 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.336 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.336 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.336 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.336 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:26.336 fio-3.35 00:25:26.336 Starting 11 threads 00:25:38.553 00:25:38.553 job0: 
(groupid=0, jobs=1): err= 0: pid=3253163: Mon Jul 15 03:28:43 2024 00:25:38.553 read: IOPS=705, BW=176MiB/s (185MB/s)(1791MiB/10155msec) 00:25:38.553 slat (usec): min=14, max=110114, avg=1232.28, stdev=4939.54 00:25:38.553 clat (usec): min=804, max=354904, avg=89378.45, stdev=58653.70 00:25:38.553 lat (usec): min=851, max=369520, avg=90610.73, stdev=59553.86 00:25:38.553 clat percentiles (usec): 00:25:38.553 | 1.00th=[ 1713], 5.00th=[ 22414], 10.00th=[ 30540], 20.00th=[ 39060], 00:25:38.553 | 30.00th=[ 49021], 40.00th=[ 62653], 50.00th=[ 76022], 60.00th=[ 95945], 00:25:38.553 | 70.00th=[111674], 80.00th=[131597], 90.00th=[160433], 95.00th=[208667], 00:25:38.553 | 99.00th=[267387], 99.50th=[287310], 99.90th=[341836], 99.95th=[341836], 00:25:38.553 | 99.99th=[354419] 00:25:38.553 bw ( KiB/s): min=64000, max=371200, per=9.54%, avg=181779.50, stdev=95300.90, samples=20 00:25:38.553 iops : min= 250, max= 1450, avg=710.05, stdev=372.27, samples=20 00:25:38.553 lat (usec) : 1000=0.04% 00:25:38.553 lat (msec) : 2=1.12%, 4=0.42%, 10=0.66%, 20=2.27%, 50=26.69% 00:25:38.553 lat (msec) : 100=30.86%, 250=35.53%, 500=2.41% 00:25:38.553 cpu : usr=0.34%, sys=2.47%, ctx=1495, majf=0, minf=4097 00:25:38.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:38.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.553 issued rwts: total=7165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.553 job1: (groupid=0, jobs=1): err= 0: pid=3253164: Mon Jul 15 03:28:43 2024 00:25:38.553 read: IOPS=800, BW=200MiB/s (210MB/s)(2010MiB/10046msec) 00:25:38.553 slat (usec): min=13, max=142473, avg=1135.66, stdev=4755.02 00:25:38.553 clat (usec): min=1372, max=342198, avg=78778.51, stdev=56888.48 00:25:38.553 lat (usec): min=1430, max=364275, avg=79914.17, stdev=57843.32 00:25:38.553 clat percentiles (msec): 00:25:38.553 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 33], 00:25:38.553 | 30.00th=[ 39], 40.00th=[ 54], 50.00th=[ 63], 60.00th=[ 73], 00:25:38.553 | 70.00th=[ 88], 80.00th=[ 117], 90.00th=[ 159], 95.00th=[ 211], 00:25:38.553 | 99.00th=[ 262], 99.50th=[ 266], 99.90th=[ 305], 99.95th=[ 313], 00:25:38.553 | 99.99th=[ 342] 00:25:38.553 bw ( KiB/s): min=59904, max=441485, per=10.71%, avg=204076.40, stdev=115329.66, samples=20 00:25:38.553 iops : min= 234, max= 1724, avg=797.10, stdev=450.37, samples=20 00:25:38.553 lat (msec) : 2=0.06%, 4=0.45%, 10=1.46%, 20=1.60%, 50=33.75% 00:25:38.553 lat (msec) : 100=37.14%, 250=23.85%, 500=1.69% 00:25:38.553 cpu : usr=0.44%, sys=2.59%, ctx=1638, majf=0, minf=4097 00:25:38.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:38.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.553 issued rwts: total=8039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.553 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.553 job2: (groupid=0, jobs=1): err= 0: pid=3253165: Mon Jul 15 03:28:43 2024 00:25:38.553 read: IOPS=515, BW=129MiB/s (135MB/s)(1309MiB/10153msec) 00:25:38.553 slat (usec): min=9, max=195539, avg=1213.55, stdev=6364.43 00:25:38.554 clat (usec): min=1032, max=431220, avg=122761.98, stdev=65637.33 00:25:38.554 lat (usec): min=1056, max=431395, avg=123975.54, stdev=66567.21 00:25:38.554 clat percentiles 
(msec): 00:25:38.554 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 30], 20.00th=[ 62], 00:25:38.554 | 30.00th=[ 87], 40.00th=[ 109], 50.00th=[ 124], 60.00th=[ 140], 00:25:38.554 | 70.00th=[ 155], 80.00th=[ 176], 90.00th=[ 207], 95.00th=[ 241], 00:25:38.554 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 342], 00:25:38.554 | 99.99th=[ 430] 00:25:38.554 bw ( KiB/s): min=53354, max=261120, per=6.94%, avg=132379.80, stdev=54741.66, samples=20 00:25:38.554 iops : min= 208, max= 1020, avg=517.05, stdev=213.85, samples=20 00:25:38.554 lat (msec) : 2=0.04%, 4=0.25%, 10=2.04%, 20=3.80%, 50=8.77% 00:25:38.554 lat (msec) : 100=20.11%, 250=60.66%, 500=4.34% 00:25:38.554 cpu : usr=0.22%, sys=1.57%, ctx=1296, majf=0, minf=4097 00:25:38.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:38.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.554 issued rwts: total=5236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.554 job3: (groupid=0, jobs=1): err= 0: pid=3253166: Mon Jul 15 03:28:43 2024 00:25:38.554 read: IOPS=622, BW=156MiB/s (163MB/s)(1586MiB/10197msec) 00:25:38.554 slat (usec): min=9, max=240601, avg=1051.20, stdev=6121.50 00:25:38.554 clat (usec): min=743, max=398232, avg=101754.00, stdev=71873.32 00:25:38.554 lat (usec): min=767, max=505666, avg=102805.20, stdev=72879.84 00:25:38.554 clat percentiles (msec): 00:25:38.554 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 33], 00:25:38.554 | 30.00th=[ 46], 40.00th=[ 65], 50.00th=[ 94], 60.00th=[ 123], 00:25:38.554 | 70.00th=[ 142], 80.00th=[ 167], 90.00th=[ 199], 95.00th=[ 230], 00:25:38.554 | 99.00th=[ 292], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:25:38.554 | 99.99th=[ 397] 00:25:38.554 bw ( KiB/s): min=56320, max=354618, per=8.43%, avg=160670.60, stdev=76997.98, samples=20 00:25:38.554 iops : min= 220, max= 1385, avg=627.55, stdev=300.69, samples=20 00:25:38.554 lat (usec) : 750=0.02%, 1000=0.03% 00:25:38.554 lat (msec) : 2=0.38%, 4=1.28%, 10=3.15%, 20=7.25%, 50=20.62% 00:25:38.554 lat (msec) : 100=19.28%, 250=45.36%, 500=2.63% 00:25:38.554 cpu : usr=0.26%, sys=1.73%, ctx=1505, majf=0, minf=3721 00:25:38.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:38.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.554 issued rwts: total=6343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.554 job4: (groupid=0, jobs=1): err= 0: pid=3253167: Mon Jul 15 03:28:43 2024 00:25:38.554 read: IOPS=833, BW=208MiB/s (219MB/s)(2088MiB/10016msec) 00:25:38.554 slat (usec): min=13, max=135333, avg=1180.20, stdev=4301.53 00:25:38.554 clat (msec): min=13, max=369, avg=75.51, stdev=45.48 00:25:38.554 lat (msec): min=16, max=369, avg=76.69, stdev=46.27 00:25:38.554 clat percentiles (msec): 00:25:38.554 | 1.00th=[ 26], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 40], 00:25:38.554 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 72], 00:25:38.554 | 70.00th=[ 83], 80.00th=[ 104], 90.00th=[ 124], 95.00th=[ 155], 00:25:38.554 | 99.00th=[ 257], 99.50th=[ 264], 99.90th=[ 292], 99.95th=[ 300], 00:25:38.554 | 99.99th=[ 372] 00:25:38.554 bw ( KiB/s): min=63488, max=474624, per=11.13%, avg=212154.20, stdev=108402.49, 
samples=20 00:25:38.554 iops : min= 248, max= 1854, avg=828.70, stdev=423.45, samples=20 00:25:38.554 lat (msec) : 20=0.16%, 50=27.53%, 100=50.28%, 250=20.57%, 500=1.47% 00:25:38.554 cpu : usr=0.45%, sys=2.91%, ctx=1560, majf=0, minf=4097 00:25:38.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:38.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.554 issued rwts: total=8352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.554 job5: (groupid=0, jobs=1): err= 0: pid=3253168: Mon Jul 15 03:28:43 2024 00:25:38.554 read: IOPS=446, BW=112MiB/s (117MB/s)(1138MiB/10190msec) 00:25:38.554 slat (usec): min=14, max=170985, avg=2020.53, stdev=6880.50 00:25:38.554 clat (msec): min=35, max=397, avg=141.18, stdev=51.57 00:25:38.554 lat (msec): min=35, max=408, avg=143.20, stdev=52.73 00:25:38.554 clat percentiles (msec): 00:25:38.554 | 1.00th=[ 51], 5.00th=[ 66], 10.00th=[ 80], 20.00th=[ 103], 00:25:38.554 | 30.00th=[ 114], 40.00th=[ 126], 50.00th=[ 134], 60.00th=[ 144], 00:25:38.554 | 70.00th=[ 159], 80.00th=[ 178], 90.00th=[ 213], 95.00th=[ 243], 00:25:38.554 | 99.00th=[ 279], 99.50th=[ 330], 99.90th=[ 376], 99.95th=[ 376], 00:25:38.554 | 99.99th=[ 397] 00:25:38.554 bw ( KiB/s): min=64000, max=192512, per=6.02%, avg=114851.75, stdev=34737.80, samples=20 00:25:38.554 iops : min= 250, max= 752, avg=448.60, stdev=135.74, samples=20 00:25:38.554 lat (msec) : 50=1.10%, 100=17.76%, 250=77.41%, 500=3.74% 00:25:38.554 cpu : usr=0.24%, sys=1.64%, ctx=928, majf=0, minf=4097 00:25:38.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:38.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.554 issued rwts: total=4550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.554 job6: (groupid=0, jobs=1): err= 0: pid=3253169: Mon Jul 15 03:28:43 2024 00:25:38.554 read: IOPS=523, BW=131MiB/s (137MB/s)(1329MiB/10154msec) 00:25:38.554 slat (usec): min=9, max=191418, avg=1442.28, stdev=6965.27 00:25:38.554 clat (usec): min=748, max=452479, avg=120702.77, stdev=61478.51 00:25:38.554 lat (usec): min=763, max=452511, avg=122145.06, stdev=62491.46 00:25:38.554 clat percentiles (msec): 00:25:38.554 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 34], 20.00th=[ 68], 00:25:38.554 | 30.00th=[ 90], 40.00th=[ 109], 50.00th=[ 125], 60.00th=[ 134], 00:25:38.554 | 70.00th=[ 148], 80.00th=[ 167], 90.00th=[ 194], 95.00th=[ 236], 00:25:38.554 | 99.00th=[ 279], 99.50th=[ 305], 99.90th=[ 338], 99.95th=[ 351], 00:25:38.554 | 99.99th=[ 451] 00:25:38.554 bw ( KiB/s): min=62976, max=227328, per=7.05%, avg=134426.00, stdev=42460.29, samples=20 00:25:38.554 iops : min= 246, max= 888, avg=525.05, stdev=165.85, samples=20 00:25:38.554 lat (usec) : 750=0.02%, 1000=0.23% 00:25:38.554 lat (msec) : 2=0.55%, 4=0.17%, 10=1.02%, 20=2.33%, 50=11.65% 00:25:38.554 lat (msec) : 100=18.70%, 250=62.35%, 500=2.99% 00:25:38.554 cpu : usr=0.28%, sys=1.62%, ctx=1306, majf=0, minf=4097 00:25:38.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:38.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:25:38.554 issued rwts: total=5315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.554 job7: (groupid=0, jobs=1): err= 0: pid=3253170: Mon Jul 15 03:28:43 2024 00:25:38.554 read: IOPS=550, BW=138MiB/s (144MB/s)(1398MiB/10158msec) 00:25:38.554 slat (usec): min=9, max=77477, avg=1498.44, stdev=5091.44 00:25:38.555 clat (msec): min=2, max=354, avg=114.64, stdev=51.61 00:25:38.555 lat (msec): min=2, max=354, avg=116.13, stdev=52.44 00:25:38.555 clat percentiles (msec): 00:25:38.555 | 1.00th=[ 18], 5.00th=[ 44], 10.00th=[ 56], 20.00th=[ 69], 00:25:38.555 | 30.00th=[ 80], 40.00th=[ 91], 50.00th=[ 108], 60.00th=[ 129], 00:25:38.555 | 70.00th=[ 142], 80.00th=[ 159], 90.00th=[ 188], 95.00th=[ 205], 00:25:38.555 | 99.00th=[ 234], 99.50th=[ 259], 99.90th=[ 338], 99.95th=[ 355], 00:25:38.555 | 99.99th=[ 355] 00:25:38.555 bw ( KiB/s): min=83456, max=246272, per=7.42%, avg=141525.80, stdev=48377.90, samples=20 00:25:38.555 iops : min= 326, max= 962, avg=552.80, stdev=188.97, samples=20 00:25:38.555 lat (msec) : 4=0.07%, 10=0.09%, 20=0.93%, 50=5.61%, 100=39.41% 00:25:38.555 lat (msec) : 250=53.28%, 500=0.61% 00:25:38.555 cpu : usr=0.36%, sys=1.72%, ctx=1242, majf=0, minf=4097 00:25:38.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:38.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.555 issued rwts: total=5593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.555 job8: (groupid=0, jobs=1): err= 0: pid=3253171: Mon Jul 15 03:28:43 2024 00:25:38.555 read: IOPS=655, BW=164MiB/s (172MB/s)(1664MiB/10159msec) 00:25:38.555 slat (usec): min=9, max=143515, avg=1195.60, stdev=5680.15 00:25:38.555 clat (usec): min=1215, max=364396, avg=96427.95, stdev=63545.72 00:25:38.555 lat (usec): min=1239, max=370485, avg=97623.54, stdev=64501.92 00:25:38.555 clat percentiles (msec): 00:25:38.555 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 39], 20.00th=[ 47], 00:25:38.555 | 30.00th=[ 54], 40.00th=[ 64], 50.00th=[ 77], 60.00th=[ 92], 00:25:38.555 | 70.00th=[ 111], 80.00th=[ 148], 90.00th=[ 207], 95.00th=[ 226], 00:25:38.555 | 99.00th=[ 268], 99.50th=[ 279], 99.90th=[ 317], 99.95th=[ 347], 00:25:38.555 | 99.99th=[ 363] 00:25:38.555 bw ( KiB/s): min=71168, max=342528, per=8.85%, avg=168703.30, stdev=82173.75, samples=20 00:25:38.555 iops : min= 278, max= 1338, avg=658.95, stdev=321.01, samples=20 00:25:38.555 lat (msec) : 2=0.11%, 4=0.47%, 10=1.31%, 20=2.27%, 50=20.78% 00:25:38.555 lat (msec) : 100=38.98%, 250=33.49%, 500=2.60% 00:25:38.555 cpu : usr=0.30%, sys=2.07%, ctx=1441, majf=0, minf=4097 00:25:38.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:38.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.555 issued rwts: total=6655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.555 job9: (groupid=0, jobs=1): err= 0: pid=3253172: Mon Jul 15 03:28:43 2024 00:25:38.555 read: IOPS=980, BW=245MiB/s (257MB/s)(2462MiB/10045msec) 00:25:38.555 slat (usec): min=14, max=41220, avg=983.99, stdev=2786.98 00:25:38.555 clat (msec): min=10, max=150, avg=64.24, stdev=26.97 00:25:38.555 lat (msec): min=10, max=164, avg=65.23, stdev=27.37 
00:25:38.555 clat percentiles (msec): 00:25:38.555 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 35], 00:25:38.555 | 30.00th=[ 47], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 68], 00:25:38.555 | 70.00th=[ 78], 80.00th=[ 89], 90.00th=[ 106], 95.00th=[ 114], 00:25:38.555 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 140], 99.95th=[ 140], 00:25:38.555 | 99.99th=[ 150] 00:25:38.555 bw ( KiB/s): min=135680, max=504320, per=13.14%, avg=250455.00, stdev=96407.30, samples=20 00:25:38.555 iops : min= 530, max= 1970, avg=978.30, stdev=376.61, samples=20 00:25:38.555 lat (msec) : 20=0.28%, 50=33.33%, 100=52.65%, 250=13.74% 00:25:38.555 cpu : usr=0.65%, sys=3.10%, ctx=1920, majf=0, minf=4097 00:25:38.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:38.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.555 issued rwts: total=9848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.555 job10: (groupid=0, jobs=1): err= 0: pid=3253175: Mon Jul 15 03:28:43 2024 00:25:38.555 read: IOPS=880, BW=220MiB/s (231MB/s)(2210MiB/10042msec) 00:25:38.555 slat (usec): min=9, max=27086, avg=967.49, stdev=2976.45 00:25:38.555 clat (msec): min=3, max=174, avg=71.70, stdev=35.20 00:25:38.555 lat (msec): min=3, max=174, avg=72.67, stdev=35.60 00:25:38.555 clat percentiles (msec): 00:25:38.555 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 39], 00:25:38.555 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 66], 60.00th=[ 75], 00:25:38.555 | 70.00th=[ 86], 80.00th=[ 103], 90.00th=[ 128], 95.00th=[ 140], 00:25:38.555 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 167], 99.95th=[ 176], 00:25:38.555 | 99.99th=[ 176] 00:25:38.555 bw ( KiB/s): min=119808, max=509952, per=11.78%, avg=224603.45, stdev=96378.81, samples=20 00:25:38.555 iops : min= 468, max= 1992, avg=877.25, stdev=376.44, samples=20 00:25:38.555 lat (msec) : 4=0.02%, 10=0.87%, 20=1.43%, 50=28.02%, 100=48.12% 00:25:38.555 lat (msec) : 250=21.54% 00:25:38.555 cpu : usr=0.44%, sys=2.86%, ctx=1707, majf=0, minf=4097 00:25:38.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:38.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:38.555 issued rwts: total=8838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:38.555 00:25:38.555 Run status group 0 (all jobs): 00:25:38.555 READ: bw=1862MiB/s (1952MB/s), 112MiB/s-245MiB/s (117MB/s-257MB/s), io=18.5GiB (19.9GB), run=10016-10197msec 00:25:38.555 00:25:38.555 Disk stats (read/write): 00:25:38.555 nvme0n1: ios=14182/0, merge=0/0, ticks=1231216/0, in_queue=1231216, util=95.20% 00:25:38.555 nvme10n1: ios=15899/0, merge=0/0, ticks=1238031/0, in_queue=1238031, util=95.61% 00:25:38.555 nvme1n1: ios=10324/0, merge=0/0, ticks=1238429/0, in_queue=1238429, util=96.12% 00:25:38.555 nvme2n1: ios=12536/0, merge=0/0, ticks=1239931/0, in_queue=1239931, util=96.44% 00:25:38.555 nvme3n1: ios=16443/0, merge=0/0, ticks=1242549/0, in_queue=1242549, util=96.59% 00:25:38.555 nvme4n1: ios=8958/0, merge=0/0, ticks=1232422/0, in_queue=1232422, util=97.25% 00:25:38.555 nvme5n1: ios=10491/0, merge=0/0, ticks=1232768/0, in_queue=1232768, util=97.57% 00:25:38.555 nvme6n1: ios=11050/0, merge=0/0, ticks=1233643/0, in_queue=1233643, 
util=97.83% 00:25:38.555 nvme7n1: ios=13154/0, merge=0/0, ticks=1234595/0, in_queue=1234595, util=98.65% 00:25:38.555 nvme8n1: ios=19520/0, merge=0/0, ticks=1240011/0, in_queue=1240011, util=99.05% 00:25:38.555 nvme9n1: ios=17488/0, merge=0/0, ticks=1241067/0, in_queue=1241067, util=99.21% 00:25:38.555 03:28:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:38.555 [global] 00:25:38.555 thread=1 00:25:38.555 invalidate=1 00:25:38.555 rw=randwrite 00:25:38.555 time_based=1 00:25:38.555 runtime=10 00:25:38.555 ioengine=libaio 00:25:38.556 direct=1 00:25:38.556 bs=262144 00:25:38.556 iodepth=64 00:25:38.556 norandommap=1 00:25:38.556 numjobs=1 00:25:38.556 00:25:38.556 [job0] 00:25:38.556 filename=/dev/nvme0n1 00:25:38.556 [job1] 00:25:38.556 filename=/dev/nvme10n1 00:25:38.556 [job2] 00:25:38.556 filename=/dev/nvme1n1 00:25:38.556 [job3] 00:25:38.556 filename=/dev/nvme2n1 00:25:38.556 [job4] 00:25:38.556 filename=/dev/nvme3n1 00:25:38.556 [job5] 00:25:38.556 filename=/dev/nvme4n1 00:25:38.556 [job6] 00:25:38.556 filename=/dev/nvme5n1 00:25:38.556 [job7] 00:25:38.556 filename=/dev/nvme6n1 00:25:38.556 [job8] 00:25:38.556 filename=/dev/nvme7n1 00:25:38.556 [job9] 00:25:38.556 filename=/dev/nvme8n1 00:25:38.556 [job10] 00:25:38.556 filename=/dev/nvme9n1 00:25:38.556 Could not set queue depth (nvme0n1) 00:25:38.556 Could not set queue depth (nvme10n1) 00:25:38.556 Could not set queue depth (nvme1n1) 00:25:38.556 Could not set queue depth (nvme2n1) 00:25:38.556 Could not set queue depth (nvme3n1) 00:25:38.556 Could not set queue depth (nvme4n1) 00:25:38.556 Could not set queue depth (nvme5n1) 00:25:38.556 Could not set queue depth (nvme6n1) 00:25:38.556 Could not set queue depth (nvme7n1) 00:25:38.556 Could not set queue depth (nvme8n1) 00:25:38.556 Could not set queue depth (nvme9n1) 00:25:38.556 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.556 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.556 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.556 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.556 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.556 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.556 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.556 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.556 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.556 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.556 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:38.556 fio-3.35 00:25:38.556 Starting 11 threads 00:25:48.526 00:25:48.526 job0: (groupid=0, jobs=1): err= 0: pid=3254203: Mon Jul 15 03:28:54 2024 00:25:48.526 write: IOPS=444, BW=111MiB/s 
(116MB/s)(1122MiB/10099msec); 0 zone resets 00:25:48.526 slat (usec): min=19, max=95459, avg=1626.06, stdev=4442.19 00:25:48.526 clat (usec): min=1028, max=339056, avg=142397.32, stdev=64222.61 00:25:48.526 lat (usec): min=1056, max=339089, avg=144023.38, stdev=65121.05 00:25:48.526 clat percentiles (msec): 00:25:48.526 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 43], 20.00th=[ 103], 00:25:48.526 | 30.00th=[ 115], 40.00th=[ 132], 50.00th=[ 142], 60.00th=[ 159], 00:25:48.526 | 70.00th=[ 180], 80.00th=[ 199], 90.00th=[ 220], 95.00th=[ 245], 00:25:48.526 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 321], 00:25:48.526 | 99.99th=[ 338] 00:25:48.526 bw ( KiB/s): min=65536, max=168448, per=7.66%, avg=113228.80, stdev=30397.31, samples=20 00:25:48.526 iops : min= 256, max= 658, avg=442.30, stdev=118.74, samples=20 00:25:48.526 lat (msec) : 2=0.16%, 4=0.80%, 10=3.32%, 20=1.76%, 50=5.17% 00:25:48.526 lat (msec) : 100=8.14%, 250=76.13%, 500=4.53% 00:25:48.526 cpu : usr=1.43%, sys=1.45%, ctx=2398, majf=0, minf=1 00:25:48.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:48.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.526 issued rwts: total=0,4486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.526 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.526 job1: (groupid=0, jobs=1): err= 0: pid=3254215: Mon Jul 15 03:28:54 2024 00:25:48.526 write: IOPS=577, BW=144MiB/s (151MB/s)(1464MiB/10134msec); 0 zone resets 00:25:48.526 slat (usec): min=18, max=90357, avg=1445.83, stdev=3551.39 00:25:48.526 clat (usec): min=1646, max=301081, avg=109253.53, stdev=60737.87 00:25:48.527 lat (usec): min=1745, max=301119, avg=110699.36, stdev=61559.01 00:25:48.527 clat percentiles (msec): 00:25:48.527 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 35], 20.00th=[ 45], 00:25:48.527 | 30.00th=[ 56], 40.00th=[ 88], 50.00th=[ 109], 60.00th=[ 136], 00:25:48.527 | 70.00th=[ 153], 80.00th=[ 174], 90.00th=[ 188], 95.00th=[ 199], 00:25:48.527 | 99.00th=[ 226], 99.50th=[ 234], 99.90th=[ 292], 99.95th=[ 292], 00:25:48.527 | 99.99th=[ 300] 00:25:48.527 bw ( KiB/s): min=83968, max=294912, per=10.03%, avg=148300.80, stdev=64455.79, samples=20 00:25:48.527 iops : min= 328, max= 1152, avg=579.30, stdev=251.78, samples=20 00:25:48.527 lat (msec) : 2=0.05%, 4=0.50%, 10=1.66%, 20=2.63%, 50=20.41% 00:25:48.527 lat (msec) : 100=18.63%, 250=55.75%, 500=0.38% 00:25:48.527 cpu : usr=1.94%, sys=1.83%, ctx=2564, majf=0, minf=1 00:25:48.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:48.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.527 issued rwts: total=0,5856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.527 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.527 job2: (groupid=0, jobs=1): err= 0: pid=3254216: Mon Jul 15 03:28:54 2024 00:25:48.527 write: IOPS=649, BW=162MiB/s (170MB/s)(1640MiB/10091msec); 0 zone resets 00:25:48.527 slat (usec): min=19, max=113367, avg=1115.07, stdev=3308.32 00:25:48.527 clat (usec): min=1495, max=339056, avg=97324.18, stdev=68995.07 00:25:48.527 lat (usec): min=1580, max=339099, avg=98439.25, stdev=69707.61 00:25:48.527 clat percentiles (msec): 00:25:48.527 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 29], 20.00th=[ 44], 00:25:48.527 | 30.00th=[ 46], 40.00th=[ 52], 50.00th=[ 73], 
60.00th=[ 100], 00:25:48.527 | 70.00th=[ 131], 80.00th=[ 161], 90.00th=[ 203], 95.00th=[ 232], 00:25:48.527 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 317], 00:25:48.527 | 99.99th=[ 338] 00:25:48.527 bw ( KiB/s): min=71680, max=323584, per=11.24%, avg=166272.00, stdev=83098.04, samples=20 00:25:48.527 iops : min= 280, max= 1264, avg=649.50, stdev=324.60, samples=20 00:25:48.527 lat (msec) : 2=0.06%, 4=0.96%, 10=2.74%, 20=3.22%, 50=32.31% 00:25:48.527 lat (msec) : 100=20.78%, 250=36.93%, 500=2.99% 00:25:48.527 cpu : usr=1.98%, sys=2.02%, ctx=3155, majf=0, minf=1 00:25:48.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:48.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.527 issued rwts: total=0,6558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.527 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.527 job3: (groupid=0, jobs=1): err= 0: pid=3254218: Mon Jul 15 03:28:54 2024 00:25:48.527 write: IOPS=405, BW=101MiB/s (106MB/s)(1022MiB/10086msec); 0 zone resets 00:25:48.527 slat (usec): min=21, max=36174, avg=2061.40, stdev=4654.18 00:25:48.527 clat (usec): min=1048, max=314343, avg=155773.46, stdev=63161.52 00:25:48.527 lat (usec): min=1110, max=314373, avg=157834.86, stdev=64092.46 00:25:48.527 clat percentiles (msec): 00:25:48.527 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 46], 20.00th=[ 115], 00:25:48.527 | 30.00th=[ 138], 40.00th=[ 150], 50.00th=[ 169], 60.00th=[ 182], 00:25:48.527 | 70.00th=[ 197], 80.00th=[ 207], 90.00th=[ 222], 95.00th=[ 243], 00:25:48.527 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 288], 99.95th=[ 288], 00:25:48.527 | 99.99th=[ 313] 00:25:48.527 bw ( KiB/s): min=67584, max=140288, per=6.97%, avg=103051.65, stdev=18200.10, samples=20 00:25:48.527 iops : min= 264, max= 548, avg=402.50, stdev=71.06, samples=20 00:25:48.527 lat (msec) : 2=0.07%, 4=0.10%, 10=1.35%, 20=4.09%, 50=5.14% 00:25:48.527 lat (msec) : 100=5.28%, 250=79.79%, 500=4.18% 00:25:48.527 cpu : usr=1.24%, sys=1.44%, ctx=1868, majf=0, minf=1 00:25:48.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:48.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.527 issued rwts: total=0,4088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.527 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.527 job4: (groupid=0, jobs=1): err= 0: pid=3254222: Mon Jul 15 03:28:54 2024 00:25:48.527 write: IOPS=507, BW=127MiB/s (133MB/s)(1285MiB/10140msec); 0 zone resets 00:25:48.527 slat (usec): min=15, max=122312, avg=1147.25, stdev=4254.93 00:25:48.527 clat (usec): min=1333, max=387163, avg=125028.29, stdev=78185.20 00:25:48.527 lat (usec): min=1404, max=387254, avg=126175.54, stdev=79138.01 00:25:48.527 clat percentiles (msec): 00:25:48.527 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 19], 20.00th=[ 44], 00:25:48.527 | 30.00th=[ 80], 40.00th=[ 106], 50.00th=[ 123], 60.00th=[ 146], 00:25:48.527 | 70.00th=[ 165], 80.00th=[ 192], 90.00th=[ 226], 95.00th=[ 255], 00:25:48.527 | 99.00th=[ 342], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 388], 00:25:48.527 | 99.99th=[ 388] 00:25:48.527 bw ( KiB/s): min=45056, max=257536, per=8.79%, avg=129996.80, stdev=49247.60, samples=20 00:25:48.527 iops : min= 176, max= 1006, avg=507.80, stdev=192.37, samples=20 00:25:48.527 lat (msec) : 2=0.16%, 4=1.28%, 10=4.88%, 
20=4.36%, 50=11.53% 00:25:48.527 lat (msec) : 100=14.76%, 250=57.28%, 500=5.74% 00:25:48.527 cpu : usr=1.40%, sys=1.97%, ctx=3319, majf=0, minf=1 00:25:48.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:48.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.527 issued rwts: total=0,5141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.527 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.527 job5: (groupid=0, jobs=1): err= 0: pid=3254247: Mon Jul 15 03:28:54 2024 00:25:48.527 write: IOPS=498, BW=125MiB/s (131MB/s)(1257MiB/10083msec); 0 zone resets 00:25:48.527 slat (usec): min=15, max=104417, avg=1028.86, stdev=3882.33 00:25:48.527 clat (usec): min=869, max=398714, avg=127240.16, stdev=76765.81 00:25:48.527 lat (usec): min=895, max=398779, avg=128269.02, stdev=77629.16 00:25:48.527 clat percentiles (msec): 00:25:48.527 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 16], 20.00th=[ 41], 00:25:48.527 | 30.00th=[ 84], 40.00th=[ 112], 50.00th=[ 138], 60.00th=[ 161], 00:25:48.527 | 70.00th=[ 176], 80.00th=[ 190], 90.00th=[ 209], 95.00th=[ 234], 00:25:48.527 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 397], 99.95th=[ 401], 00:25:48.527 | 99.99th=[ 401] 00:25:48.527 bw ( KiB/s): min=43008, max=250880, per=8.60%, avg=127135.95, stdev=52325.69, samples=20 00:25:48.527 iops : min= 168, max= 980, avg=496.60, stdev=204.43, samples=20 00:25:48.527 lat (usec) : 1000=0.04% 00:25:48.527 lat (msec) : 2=0.48%, 4=1.55%, 10=5.13%, 20=4.51%, 50=10.24% 00:25:48.527 lat (msec) : 100=13.76%, 250=60.95%, 500=3.34% 00:25:48.527 cpu : usr=1.48%, sys=1.72%, ctx=3584, majf=0, minf=1 00:25:48.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:48.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.528 issued rwts: total=0,5029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.528 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.528 job6: (groupid=0, jobs=1): err= 0: pid=3254260: Mon Jul 15 03:28:54 2024 00:25:48.528 write: IOPS=538, BW=135MiB/s (141MB/s)(1360MiB/10108msec); 0 zone resets 00:25:48.528 slat (usec): min=23, max=30548, avg=1400.52, stdev=3581.11 00:25:48.528 clat (usec): min=1636, max=363426, avg=117489.33, stdev=65916.55 00:25:48.528 lat (usec): min=1693, max=363457, avg=118889.85, stdev=66866.03 00:25:48.528 clat percentiles (msec): 00:25:48.528 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 29], 20.00th=[ 63], 00:25:48.528 | 30.00th=[ 77], 40.00th=[ 91], 50.00th=[ 113], 60.00th=[ 132], 00:25:48.528 | 70.00th=[ 153], 80.00th=[ 182], 90.00th=[ 203], 95.00th=[ 236], 00:25:48.528 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 338], 99.95th=[ 338], 00:25:48.528 | 99.99th=[ 363] 00:25:48.528 bw ( KiB/s): min=67584, max=265728, per=9.31%, avg=137637.05, stdev=60700.03, samples=20 00:25:48.528 iops : min= 264, max= 1038, avg=537.60, stdev=237.13, samples=20 00:25:48.528 lat (msec) : 2=0.02%, 4=0.46%, 10=2.00%, 20=3.40%, 50=10.74% 00:25:48.528 lat (msec) : 100=27.65%, 250=52.86%, 500=2.87% 00:25:48.528 cpu : usr=1.90%, sys=1.74%, ctx=2899, majf=0, minf=1 00:25:48.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:48.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:25:48.528 issued rwts: total=0,5439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.528 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.528 job7: (groupid=0, jobs=1): err= 0: pid=3254278: Mon Jul 15 03:28:54 2024 00:25:48.528 write: IOPS=615, BW=154MiB/s (161MB/s)(1549MiB/10071msec); 0 zone resets 00:25:48.528 slat (usec): min=19, max=62232, avg=1079.06, stdev=3022.41 00:25:48.528 clat (msec): min=2, max=321, avg=102.95, stdev=66.08 00:25:48.528 lat (msec): min=2, max=333, avg=104.03, stdev=66.67 00:25:48.528 clat percentiles (msec): 00:25:48.528 | 1.00th=[ 11], 5.00th=[ 25], 10.00th=[ 41], 20.00th=[ 48], 00:25:48.528 | 30.00th=[ 54], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 96], 00:25:48.528 | 70.00th=[ 131], 80.00th=[ 178], 90.00th=[ 203], 95.00th=[ 224], 00:25:48.528 | 99.00th=[ 288], 99.50th=[ 300], 99.90th=[ 313], 99.95th=[ 313], 00:25:48.528 | 99.99th=[ 321] 00:25:48.528 bw ( KiB/s): min=80384, max=345088, per=10.61%, avg=156953.60, stdev=76240.05, samples=20 00:25:48.528 iops : min= 314, max= 1348, avg=613.10, stdev=297.81, samples=20 00:25:48.528 lat (msec) : 4=0.10%, 10=0.71%, 20=2.91%, 50=18.65%, 100=39.17% 00:25:48.528 lat (msec) : 250=36.57%, 500=1.91% 00:25:48.528 cpu : usr=2.23%, sys=2.08%, ctx=3111, majf=0, minf=1 00:25:48.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:48.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.528 issued rwts: total=0,6194,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.528 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.528 job8: (groupid=0, jobs=1): err= 0: pid=3254335: Mon Jul 15 03:28:54 2024 00:25:48.528 write: IOPS=572, BW=143MiB/s (150MB/s)(1451MiB/10135msec); 0 zone resets 00:25:48.528 slat (usec): min=19, max=66632, avg=1340.89, stdev=3355.98 00:25:48.528 clat (usec): min=967, max=355246, avg=110400.42, stdev=62310.80 00:25:48.528 lat (usec): min=1013, max=355337, avg=111741.31, stdev=63020.22 00:25:48.528 clat percentiles (msec): 00:25:48.528 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 29], 20.00th=[ 48], 00:25:48.528 | 30.00th=[ 77], 40.00th=[ 87], 50.00th=[ 111], 60.00th=[ 127], 00:25:48.528 | 70.00th=[ 148], 80.00th=[ 171], 90.00th=[ 190], 95.00th=[ 201], 00:25:48.528 | 99.00th=[ 271], 99.50th=[ 317], 99.90th=[ 347], 99.95th=[ 351], 00:25:48.528 | 99.99th=[ 355] 00:25:48.528 bw ( KiB/s): min=81920, max=361472, per=9.94%, avg=146936.85, stdev=63701.13, samples=20 00:25:48.528 iops : min= 320, max= 1412, avg=573.95, stdev=248.82, samples=20 00:25:48.528 lat (usec) : 1000=0.02% 00:25:48.528 lat (msec) : 2=0.16%, 4=1.45%, 10=3.41%, 20=3.03%, 50=13.00% 00:25:48.528 lat (msec) : 100=23.97%, 250=53.67%, 500=1.29% 00:25:48.528 cpu : usr=1.62%, sys=2.04%, ctx=2943, majf=0, minf=1 00:25:48.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:48.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.528 issued rwts: total=0,5802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.528 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.528 job9: (groupid=0, jobs=1): err= 0: pid=3254361: Mon Jul 15 03:28:54 2024 00:25:48.528 write: IOPS=532, BW=133MiB/s (140MB/s)(1343MiB/10077msec); 0 zone resets 00:25:48.528 slat (usec): min=21, max=90998, avg=1311.86, stdev=3634.56 00:25:48.528 clat (usec): min=1172, max=342835, 
avg=118740.54, stdev=60095.74 00:25:48.528 lat (usec): min=1231, max=342923, avg=120052.40, stdev=60825.69 00:25:48.528 clat percentiles (msec): 00:25:48.528 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 39], 20.00th=[ 66], 00:25:48.528 | 30.00th=[ 84], 40.00th=[ 101], 50.00th=[ 118], 60.00th=[ 138], 00:25:48.528 | 70.00th=[ 153], 80.00th=[ 171], 90.00th=[ 188], 95.00th=[ 207], 00:25:48.528 | 99.00th=[ 292], 99.50th=[ 313], 99.90th=[ 334], 99.95th=[ 342], 00:25:48.528 | 99.99th=[ 342] 00:25:48.528 bw ( KiB/s): min=61952, max=239616, per=9.19%, avg=135859.20, stdev=42966.04, samples=20 00:25:48.528 iops : min= 242, max= 936, avg=530.70, stdev=167.84, samples=20 00:25:48.528 lat (msec) : 2=0.06%, 4=0.17%, 10=1.43%, 20=2.64%, 50=11.66% 00:25:48.528 lat (msec) : 100=23.00%, 250=58.36%, 500=2.68% 00:25:48.528 cpu : usr=1.78%, sys=1.75%, ctx=2983, majf=0, minf=1 00:25:48.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:48.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.528 issued rwts: total=0,5370,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.528 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.528 job10: (groupid=0, jobs=1): err= 0: pid=3254370: Mon Jul 15 03:28:54 2024 00:25:48.528 write: IOPS=456, BW=114MiB/s (120MB/s)(1154MiB/10107msec); 0 zone resets 00:25:48.528 slat (usec): min=25, max=115627, avg=1862.69, stdev=4842.03 00:25:48.528 clat (msec): min=6, max=386, avg=138.17, stdev=65.15 00:25:48.528 lat (msec): min=7, max=386, avg=140.04, stdev=65.97 00:25:48.528 clat percentiles (msec): 00:25:48.528 | 1.00th=[ 28], 5.00th=[ 70], 10.00th=[ 77], 20.00th=[ 81], 00:25:48.528 | 30.00th=[ 92], 40.00th=[ 107], 50.00th=[ 122], 60.00th=[ 144], 00:25:48.528 | 70.00th=[ 169], 80.00th=[ 194], 90.00th=[ 224], 95.00th=[ 255], 00:25:48.528 | 99.00th=[ 342], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 388], 00:25:48.528 | 99.99th=[ 388] 00:25:48.528 bw ( KiB/s): min=43008, max=196608, per=7.88%, avg=116556.80, stdev=46226.56, samples=20 00:25:48.528 iops : min= 168, max= 768, avg=455.30, stdev=180.57, samples=20 00:25:48.528 lat (msec) : 10=0.17%, 20=0.54%, 50=1.78%, 100=34.03%, 250=57.86% 00:25:48.528 lat (msec) : 500=5.61% 00:25:48.528 cpu : usr=1.34%, sys=1.45%, ctx=1804, majf=0, minf=1 00:25:48.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:48.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.529 issued rwts: total=0,4616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.529 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.529 00:25:48.529 Run status group 0 (all jobs): 00:25:48.529 WRITE: bw=1444MiB/s (1514MB/s), 101MiB/s-162MiB/s (106MB/s-170MB/s), io=14.3GiB (15.4GB), run=10071-10140msec 00:25:48.529 00:25:48.529 Disk stats (read/write): 00:25:48.529 nvme0n1: ios=49/8774, merge=0/0, ticks=570/1206410, in_queue=1206980, util=99.35% 00:25:48.529 nvme10n1: ios=49/11512, merge=0/0, ticks=297/1207956, in_queue=1208253, util=99.51% 00:25:48.529 nvme1n1: ios=45/12848, merge=0/0, ticks=153/1211588, in_queue=1211741, util=98.24% 00:25:48.529 nvme2n1: ios=5/7871, merge=0/0, ticks=210/1201386, in_queue=1201596, util=97.45% 00:25:48.529 nvme3n1: ios=0/10080, merge=0/0, ticks=0/1220889, in_queue=1220889, util=97.54% 00:25:48.529 nvme4n1: ios=0/9728, merge=0/0, ticks=0/1224009, 
in_queue=1224009, util=97.91% 00:25:48.529 nvme5n1: ios=46/10655, merge=0/0, ticks=82/1203183, in_queue=1203265, util=98.30% 00:25:48.529 nvme6n1: ios=0/12042, merge=0/0, ticks=0/1218101, in_queue=1218101, util=98.23% 00:25:48.529 nvme7n1: ios=0/11399, merge=0/0, ticks=0/1211036, in_queue=1211036, util=98.66% 00:25:48.529 nvme8n1: ios=0/10467, merge=0/0, ticks=0/1213910, in_queue=1213910, util=98.90% 00:25:48.529 nvme9n1: ios=44/9047, merge=0/0, ticks=721/1199436, in_queue=1200157, util=100.00% 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:48.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.529 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:48.786 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:48.786 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:48.786 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:48.786 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:48.786 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:48.786 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:48.786 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:48.786 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:48.787 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:48.787 03:28:54 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.787 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.787 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.787 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.787 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:48.787 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:48.787 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:48.787 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:48.787 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:48.787 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:49.043 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:49.043 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:49.043 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:49.043 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:49.043 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.043 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.043 03:28:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.043 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.043 03:28:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:49.301 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.301 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:49.565 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:49.565 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.565 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:49.863 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:49.863 03:28:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:49.863 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:49.863 03:28:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:50.139 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:50.139 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.139 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:50.398 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.398 rmmod nvme_tcp 00:25:50.398 rmmod nvme_fabrics 00:25:50.398 rmmod nvme_keyring 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3248905 ']' 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3248905 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 3248905 ']' 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 3248905 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3248905 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3248905' 00:25:50.398 killing process with pid 3248905 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 3248905 00:25:50.398 03:28:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 3248905 00:25:50.964 03:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:50.964 03:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.964 03:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.964 03:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.964 03:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.964 03:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.964 03:28:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.964 03:28:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.497 03:28:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:53.497 
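The teardown traced above is driven by target/multiconnection.sh: once the 11-job fio write workload finishes, each subsystem is disconnected on the initiator side and then deleted on the target over JSON-RPC before nvmftestfini unloads the kernel modules. A minimal standalone sketch of that loop, assuming SPDK's rpc.py is on PATH, nvme-cli is installed, and the subsystems and serials follow the cnode1..cnode11 / SPDK1..SPDK11 naming seen in the trace:

```bash
#!/usr/bin/env bash
# Per-subsystem teardown as in target/multiconnection.sh@37-40 (sketch).
NVMF_SUBSYS=11

for i in $(seq 1 "$NVMF_SUBSYS"); do
    # Detach the kernel initiator from this subsystem.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"

    # Simplified waitforserial_disconnect: the helper in the trace retries a
    # bounded number of times; this version loops until the serial is gone.
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
        sleep 1
    done

    # Remove the subsystem on the target side over JSON-RPC.
    rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done
```

Waiting for the block device to disappear before calling nvmf_delete_subsystem keeps the target from tearing the subsystem down while the initiator still has it attached, which is exactly what the lsblk/grep retry loops in the trace are guarding against.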
00:25:53.497 real 1m0.532s 00:25:53.497 user 3m21.967s 00:25:53.497 sys 0m25.554s 00:25:53.497 03:28:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:53.497 03:28:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.497 ************************************ 00:25:53.497 END TEST nvmf_multiconnection 00:25:53.497 ************************************ 00:25:53.497 03:28:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:53.497 03:28:59 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:53.497 03:28:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:53.497 03:28:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.497 03:28:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:53.497 ************************************ 00:25:53.497 START TEST nvmf_initiator_timeout 00:25:53.497 ************************************ 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:53.497 * Looking for test storage... 00:25:53.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:53.497 03:28:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.423 03:29:01 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:55.423 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:55.423 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:55.423 
03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:55.423 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.423 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:55.424 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:55.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:25:55.424 00:25:55.424 --- 10.0.0.2 ping statistics --- 00:25:55.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.424 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:25:55.424 00:25:55.424 --- 10.0.0.1 ping statistics --- 00:25:55.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.424 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3257682 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3257682 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 3257682 ']' 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:55.424 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.424 [2024-07-15 03:29:01.301414] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
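At this point nvmfappstart has just launched the target: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace created during nvmftestinit, and waitforlisten 3257682 blocks until the app's RPC socket answers before any subsystems are configured. A rough equivalent of that startup, with the readiness poll simplified (the real helper also applies a timeout; SPDK_DIR below stands in for the Jenkins workspace checkout):

```bash
#!/usr/bin/env bash
# Start nvmf_tgt in the target namespace and wait for its RPC socket,
# as nvmfappstart/waitforlisten do in the trace (sketch).
SPDK_DIR=/path/to/spdk   # stand-in for the workspace path in the trace

ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Simplified waitforlisten: poll until JSON-RPC answers on the default
# /var/tmp/spdk.sock (a filesystem socket, so it is reachable from the
# default namespace), and bail out if the process dies first.
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up"
```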
00:25:55.424 [2024-07-15 03:29:01.301487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.424 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.424 [2024-07-15 03:29:01.365119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.424 [2024-07-15 03:29:01.450130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.424 [2024-07-15 03:29:01.450191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.424 [2024-07-15 03:29:01.450205] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.424 [2024-07-15 03:29:01.450216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.424 [2024-07-15 03:29:01.450235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.424 [2024-07-15 03:29:01.450301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.424 [2024-07-15 03:29:01.450398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.424 [2024-07-15 03:29:01.450463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.424 [2024-07-15 03:29:01.450465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.682 Malloc0 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.682 Delay0 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:55.682 03:29:01 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.682 [2024-07-15 03:29:01.641115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.682 [2024-07-15 03:29:01.669395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.682 03:29:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:56.247 03:29:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:56.247 03:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:56.247 03:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.247 03:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:56.247 03:29:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:58.771 03:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:58.771 03:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:58.771 03:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:58.771 03:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:58.771 03:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:58.771 03:29:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:58.771 03:29:04 
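[Editor's note: what follows is the heart of the initiator-timeout test. fio-wrapper starts a 60-second write job against /dev/nvme0n1 in the background, and the rpc_cmd calls traced below then raise Delay0's latencies from the 30 us it was created with to 31000000 us (about 31 s per I/O) to provoke host-side timeouts mid-run, before dropping them back to 30 us. Since rpc_cmd is effectively a wrapper over scripts/rpc.py, the latency flip is equivalent to roughly this condensed restatement, not an extra test step:

# Stall the delay bdev far past the initiator timeout, then restore it.
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000   # ~31 s per read
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30         # back to 30 us

The pass criterion is simply that fio finishes with err=0 despite the stall.]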
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3258004 00:25:58.772 03:29:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:58.772 03:29:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:58.772 [global] 00:25:58.772 thread=1 00:25:58.772 invalidate=1 00:25:58.772 rw=write 00:25:58.772 time_based=1 00:25:58.772 runtime=60 00:25:58.772 ioengine=libaio 00:25:58.772 direct=1 00:25:58.772 bs=4096 00:25:58.772 iodepth=1 00:25:58.772 norandommap=0 00:25:58.772 numjobs=1 00:25:58.772 00:25:58.772 verify_dump=1 00:25:58.772 verify_backlog=512 00:25:58.772 verify_state_save=0 00:25:58.772 do_verify=1 00:25:58.772 verify=crc32c-intel 00:25:58.772 [job0] 00:25:58.772 filename=/dev/nvme0n1 00:25:58.772 Could not set queue depth (nvme0n1) 00:25:58.772 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:58.772 fio-3.35 00:25:58.772 Starting 1 thread 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.295 true 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.295 true 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.295 true 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.295 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:01.296 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.296 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.296 true 00:26:01.296 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.296 03:29:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.569 true 00:26:04.569 03:29:10 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.569 true 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.569 true 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:04.569 true 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:04.569 03:29:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3258004 00:27:00.797 00:27:00.797 job0: (groupid=0, jobs=1): err= 0: pid=3258193: Mon Jul 15 03:30:04 2024 00:27:00.797 read: IOPS=77, BW=311KiB/s (319kB/s)(18.2MiB/60007msec) 00:27:00.797 slat (nsec): min=4216, max=67279, avg=15947.63, stdev=9585.45 00:27:00.797 clat (usec): min=254, max=41261k, avg=12583.19, stdev=603971.30 00:27:00.797 lat (usec): min=260, max=41261k, avg=12599.14, stdev=603971.36 00:27:00.797 clat percentiles (usec): 00:27:00.797 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 285], 00:27:00.797 | 20.00th=[ 297], 30.00th=[ 310], 40.00th=[ 318], 00:27:00.797 | 50.00th=[ 330], 60.00th=[ 351], 70.00th=[ 375], 00:27:00.797 | 80.00th=[ 388], 90.00th=[ 445], 95.00th=[ 41157], 00:27:00.797 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:27:00.797 | 99.95th=[ 42206], 99.99th=[17112761] 00:27:00.797 write: IOPS=85, BW=341KiB/s (349kB/s)(20.0MiB/60007msec); 0 zone resets 00:27:00.797 slat (nsec): min=5473, max=59488, avg=10441.01, stdev=5329.28 00:27:00.797 clat (usec): min=181, max=1042, avg=215.38, stdev=22.33 00:27:00.797 lat (usec): min=187, max=1050, avg=225.82, stdev=24.12 00:27:00.797 clat percentiles (usec): 00:27:00.797 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 202], 00:27:00.797 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:27:00.797 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 251], 00:27:00.797 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 314], 99.95th=[ 392], 00:27:00.797 | 99.99th=[ 1045] 00:27:00.797 bw ( KiB/s): min= 696, max= 8192, per=100.00%, avg=4551.11, stdev=2544.57, samples=9 00:27:00.797 iops : min= 174, max= 2048, avg=1137.78, stdev=636.14, samples=9 00:27:00.797 lat (usec) : 250=49.57%, 500=46.30%, 750=0.15% 00:27:00.797 lat (msec) : 2=0.01%, 50=3.95%, >=2000=0.01% 00:27:00.797 cpu : usr=0.12%, sys=0.23%, ctx=9788, 
majf=0, minf=2 00:27:00.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:00.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.797 issued rwts: total=4668,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:00.797 00:27:00.797 Run status group 0 (all jobs): 00:27:00.797 READ: bw=311KiB/s (319kB/s), 311KiB/s-311KiB/s (319kB/s-319kB/s), io=18.2MiB (19.1MB), run=60007-60007msec 00:27:00.797 WRITE: bw=341KiB/s (349kB/s), 341KiB/s-341KiB/s (349kB/s-349kB/s), io=20.0MiB (21.0MB), run=60007-60007msec 00:27:00.797 00:27:00.797 Disk stats (read/write): 00:27:00.797 nvme0n1: ios=4764/5120, merge=0/0, ticks=18731/1061, in_queue=19792, util=99.83% 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:00.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:00.797 nvmf hotplug test: fio successful as expected 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:00.797 rmmod nvme_tcp 00:27:00.797 rmmod nvme_fabrics 00:27:00.797 rmmod nvme_keyring 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3257682 ']' 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3257682 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 3257682 ']' 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 3257682 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3257682 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3257682' 00:27:00.797 killing process with pid 3257682 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 3257682 00:27:00.797 03:30:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 3257682 00:27:00.797 03:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:00.797 03:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:00.797 03:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:00.797 03:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.797 03:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:00.797 03:30:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.797 03:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.797 03:30:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.366 03:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:01.366 00:27:01.366 real 1m8.182s 00:27:01.366 user 4m11.243s 00:27:01.366 sys 0m6.365s 00:27:01.366 03:30:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.366 03:30:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.366 ************************************ 00:27:01.366 END TEST nvmf_initiator_timeout 00:27:01.366 ************************************ 00:27:01.366 03:30:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:01.366 03:30:07 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:01.366 03:30:07 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:01.366 03:30:07 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:01.366 
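[Editor's note: the long block that follows is gather_supported_nvmf_pci_devs walking the PCI bus. Stripped of the xtrace noise, the mechanism is a sysfs lookup per matched device; a minimal sketch using the IDs from this run:

# Intel E810 ports (device 0x159b) resolve to their kernel net devices via sysfs.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"   # prints cvl_0_0 / cvl_0_1 here
done
]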
03:30:07 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.366 03:30:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:03.268 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:03.268 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:03.268 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.268 03:30:09 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.269 03:30:09 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:03.269 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:03.269 03:30:09 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.269 03:30:09 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:03.269 03:30:09 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.269 03:30:09 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:03.269 03:30:09 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:03.269 03:30:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:03.269 03:30:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.269 03:30:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.269 ************************************ 00:27:03.269 START TEST nvmf_perf_adq 00:27:03.269 ************************************ 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:03.269 * Looking for test storage... 
00:27:03.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:03.269 03:30:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:05.171 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:05.171 Found 0000:0a:00.1 (0x8086 - 0x159b) 
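[Editor's note: once the two E810 ports below are resolved to cvl_0_0/cvl_0_1 again, perf_adq.sh reloads the NIC driver (adq_reload_driver) so ADQ starts from a clean channel configuration; condensed from the trace that follows:

# adq_reload_driver, as executed a few lines below:
rmmod ice      # drop the E810 driver and any stale traffic-class state
modprobe ice   # reload it
sleep 5        # let the ports come back up before nvmftestinit reruns
]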
00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:05.171 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:05.171 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:05.171 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.172 03:30:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:05.172 03:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.172 03:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:05.172 03:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:05.172 03:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:05.172 03:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:05.739 03:30:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:07.639 03:30:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:12.909 03:30:18 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:12.909 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:12.909 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:12.909 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:12.909 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.909 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.909 03:30:18 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:12.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:27:12.910 00:27:12.910 --- 10.0.0.2 ping statistics --- 00:27:12.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.910 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:12.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:27:12.910 00:27:12.910 --- 10.0.0.1 ping statistics --- 00:27:12.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.910 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3270320 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3270320 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3270320 ']' 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.910 03:30:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.910 [2024-07-15 03:30:18.978735] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:27:12.910 [2024-07-15 03:30:18.978820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.910 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.168 [2024-07-15 03:30:19.053405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.168 [2024-07-15 03:30:19.144198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.168 [2024-07-15 03:30:19.144258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.168 [2024-07-15 03:30:19.144283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.168 [2024-07-15 03:30:19.144296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.168 [2024-07-15 03:30:19.144317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.168 [2024-07-15 03:30:19.144406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.168 [2024-07-15 03:30:19.144460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.168 [2024-07-15 03:30:19.144576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.168 [2024-07-15 03:30:19.144577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.168 03:30:19 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.427 [2024-07-15 03:30:19.348437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.427 Malloc1 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.427 [2024-07-15 03:30:19.399240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3270352 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:13.427 03:30:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:13.427 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.326 03:30:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:15.326 03:30:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.326 03:30:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:15.326 03:30:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.326 03:30:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:15.326 
"tick_rate": 2700000000, 00:27:15.326 "poll_groups": [ 00:27:15.326 { 00:27:15.326 "name": "nvmf_tgt_poll_group_000", 00:27:15.326 "admin_qpairs": 1, 00:27:15.326 "io_qpairs": 1, 00:27:15.326 "current_admin_qpairs": 1, 00:27:15.326 "current_io_qpairs": 1, 00:27:15.326 "pending_bdev_io": 0, 00:27:15.326 "completed_nvme_io": 20817, 00:27:15.326 "transports": [ 00:27:15.326 { 00:27:15.326 "trtype": "TCP" 00:27:15.326 } 00:27:15.326 ] 00:27:15.326 }, 00:27:15.326 { 00:27:15.326 "name": "nvmf_tgt_poll_group_001", 00:27:15.326 "admin_qpairs": 0, 00:27:15.326 "io_qpairs": 1, 00:27:15.326 "current_admin_qpairs": 0, 00:27:15.326 "current_io_qpairs": 1, 00:27:15.326 "pending_bdev_io": 0, 00:27:15.326 "completed_nvme_io": 20101, 00:27:15.326 "transports": [ 00:27:15.326 { 00:27:15.326 "trtype": "TCP" 00:27:15.326 } 00:27:15.326 ] 00:27:15.326 }, 00:27:15.326 { 00:27:15.326 "name": "nvmf_tgt_poll_group_002", 00:27:15.326 "admin_qpairs": 0, 00:27:15.326 "io_qpairs": 1, 00:27:15.326 "current_admin_qpairs": 0, 00:27:15.326 "current_io_qpairs": 1, 00:27:15.326 "pending_bdev_io": 0, 00:27:15.326 "completed_nvme_io": 20754, 00:27:15.326 "transports": [ 00:27:15.326 { 00:27:15.326 "trtype": "TCP" 00:27:15.326 } 00:27:15.326 ] 00:27:15.326 }, 00:27:15.326 { 00:27:15.326 "name": "nvmf_tgt_poll_group_003", 00:27:15.326 "admin_qpairs": 0, 00:27:15.326 "io_qpairs": 1, 00:27:15.326 "current_admin_qpairs": 0, 00:27:15.326 "current_io_qpairs": 1, 00:27:15.326 "pending_bdev_io": 0, 00:27:15.326 "completed_nvme_io": 20220, 00:27:15.326 "transports": [ 00:27:15.326 { 00:27:15.326 "trtype": "TCP" 00:27:15.326 } 00:27:15.326 ] 00:27:15.326 } 00:27:15.326 ] 00:27:15.326 }' 00:27:15.326 03:30:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:15.326 03:30:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:15.326 03:30:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:15.326 03:30:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:15.326 03:30:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3270352 00:27:23.429 Initializing NVMe Controllers 00:27:23.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:23.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:23.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:23.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:23.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:23.430 Initialization complete. Launching workers. 
00:27:23.430 ======================================================== 00:27:23.430 Latency(us) 00:27:23.430 Device Information : IOPS MiB/s Average min max 00:27:23.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10590.26 41.37 6042.94 2947.57 8041.39 00:27:23.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10589.26 41.36 6045.77 2376.65 8800.80 00:27:23.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10881.35 42.51 5883.77 3519.86 7692.29 00:27:23.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10918.55 42.65 5863.52 2459.95 8510.04 00:27:23.430 ======================================================== 00:27:23.430 Total : 42979.42 167.89 5957.76 2376.65 8800.80 00:27:23.430 00:27:23.430 03:30:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:23.430 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.430 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:23.430 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.430 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:23.430 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.430 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.430 rmmod nvme_tcp 00:27:23.430 rmmod nvme_fabrics 00:27:23.430 rmmod nvme_keyring 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3270320 ']' 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3270320 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3270320 ']' 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3270320 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3270320 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3270320' 00:27:23.687 killing process with pid 3270320 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3270320 00:27:23.687 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3270320 00:27:23.945 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:23.945 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:23.945 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:23.945 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.945 03:30:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:23.945 03:30:29 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.945 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.945 03:30:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.843 03:30:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:25.843 03:30:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:25.843 03:30:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:26.409 03:30:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:28.338 03:30:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.606 03:30:39 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:33.606 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:33.606 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
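The device scan here resolves each matched E810 function (device ID 0x159b) to its kernel interface through the device's net/ directory in sysfs; a minimal sketch of that lookup, assuming the same sysfs layout:

pci=0000:0a:00.0
# the bound ice driver publishes the netdev name under the PCI device node
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    echo "Found net devices under $pci: ${dev##*/}"
done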
00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:33.606 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:33.606 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:33.606 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.607 
03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:33.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:27:33.607 00:27:33.607 --- 10.0.0.2 ping statistics --- 00:27:33.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.607 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:33.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:27:33.607 00:27:33.607 --- 10.0.0.1 ping statistics --- 00:27:33.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.607 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:33.607 net.core.busy_poll = 1 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:33.607 net.core.busy_read = 1 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3272958 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3272958 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3272958 ']' 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:33.607 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.865 [2024-07-15 03:30:39.785835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:33.865 [2024-07-15 03:30:39.785941] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.865 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.865 [2024-07-15 03:30:39.851691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.865 [2024-07-15 03:30:39.941287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.865 [2024-07-15 03:30:39.941347] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.865 [2024-07-15 03:30:39.941360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.865 [2024-07-15 03:30:39.941371] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.865 [2024-07-15 03:30:39.941380] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
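adq_configure_driver above is the core of this pass: it pins NVMe/TCP traffic into its own hardware traffic class on the ice port before the target restarts. A consolidated sketch of those steps, using the interface and namespace names from this run:

ns='ip netns exec cvl_0_0_ns_spdk'   # unquoted expansion used as a command prefix
$ns ethtool --offload cvl_0_0 hw-tc-offload on
$ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1       # poll sockets instead of sleeping on them
sysctl -w net.core.busy_read=1
# TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (the ADQ channel)
$ns /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$ns /usr/sbin/tc qdisc add dev cvl_0_0 ingress
# steer NVMe/TCP to 10.0.0.2:4420 into TC1 entirely in hardware (skip_sw)
$ns /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 \
    flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# the test then runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align XPS with the RX queues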
00:27:33.865 [2024-07-15 03:30:39.941465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.865 [2024-07-15 03:30:39.941531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.865 [2024-07-15 03:30:39.941595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.865 [2024-07-15 03:30:39.941597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.865 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:33.865 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:33.865 03:30:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:33.865 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:33.865 03:30:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.123 [2024-07-15 03:30:40.195884] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.123 Malloc1 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.123 03:30:40 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.123 [2024-07-15 03:30:40.249042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3272990 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:34.123 03:30:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:34.381 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.286 03:30:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:36.287 03:30:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.287 03:30:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:36.287 03:30:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.287 03:30:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:36.287 "tick_rate": 2700000000, 00:27:36.287 "poll_groups": [ 00:27:36.287 { 00:27:36.287 "name": "nvmf_tgt_poll_group_000", 00:27:36.287 "admin_qpairs": 1, 00:27:36.287 "io_qpairs": 3, 00:27:36.287 "current_admin_qpairs": 1, 00:27:36.287 "current_io_qpairs": 3, 00:27:36.287 "pending_bdev_io": 0, 00:27:36.287 "completed_nvme_io": 27380, 00:27:36.287 "transports": [ 00:27:36.287 { 00:27:36.287 "trtype": "TCP" 00:27:36.287 } 00:27:36.287 ] 00:27:36.287 }, 00:27:36.287 { 00:27:36.287 "name": "nvmf_tgt_poll_group_001", 00:27:36.287 "admin_qpairs": 0, 00:27:36.287 "io_qpairs": 1, 00:27:36.287 "current_admin_qpairs": 0, 00:27:36.287 "current_io_qpairs": 1, 00:27:36.287 "pending_bdev_io": 0, 00:27:36.287 "completed_nvme_io": 25986, 00:27:36.287 "transports": [ 00:27:36.287 { 00:27:36.287 "trtype": "TCP" 00:27:36.287 } 00:27:36.287 ] 00:27:36.287 }, 00:27:36.287 { 00:27:36.287 "name": "nvmf_tgt_poll_group_002", 00:27:36.287 "admin_qpairs": 0, 00:27:36.287 "io_qpairs": 0, 00:27:36.287 "current_admin_qpairs": 0, 00:27:36.287 "current_io_qpairs": 0, 00:27:36.287 "pending_bdev_io": 0, 00:27:36.287 "completed_nvme_io": 0, 
00:27:36.287 "transports": [ 00:27:36.287 { 00:27:36.287 "trtype": "TCP" 00:27:36.287 } 00:27:36.287 ] 00:27:36.287 }, 00:27:36.287 { 00:27:36.287 "name": "nvmf_tgt_poll_group_003", 00:27:36.287 "admin_qpairs": 0, 00:27:36.287 "io_qpairs": 0, 00:27:36.287 "current_admin_qpairs": 0, 00:27:36.287 "current_io_qpairs": 0, 00:27:36.287 "pending_bdev_io": 0, 00:27:36.287 "completed_nvme_io": 0, 00:27:36.287 "transports": [ 00:27:36.287 { 00:27:36.287 "trtype": "TCP" 00:27:36.287 } 00:27:36.287 ] 00:27:36.287 } 00:27:36.287 ] 00:27:36.287 }' 00:27:36.287 03:30:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:36.287 03:30:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:36.287 03:30:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:36.287 03:30:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:36.287 03:30:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3272990 00:27:44.396 Initializing NVMe Controllers 00:27:44.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:44.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:44.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:44.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:44.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:44.396 Initialization complete. Launching workers. 00:27:44.396 ======================================================== 00:27:44.396 Latency(us) 00:27:44.396 Device Information : IOPS MiB/s Average min max 00:27:44.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4094.20 15.99 15686.12 1901.71 64745.86 00:27:44.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13430.40 52.46 4764.95 1352.24 7058.29 00:27:44.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5713.80 22.32 11226.54 1855.31 60343.51 00:27:44.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4342.50 16.96 14743.46 2113.21 61575.33 00:27:44.396 ======================================================== 00:27:44.396 Total : 27580.90 107.74 9295.81 1352.24 64745.86 00:27:44.396 00:27:44.396 03:30:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:44.396 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:44.396 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:44.396 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:44.396 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:44.396 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:44.396 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:44.396 rmmod nvme_tcp 00:27:44.397 rmmod nvme_fabrics 00:27:44.397 rmmod nvme_keyring 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3272958 ']' 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3272958 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3272958 ']' 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3272958 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3272958 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3272958' 00:27:44.397 killing process with pid 3272958 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3272958 00:27:44.397 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3272958 00:27:44.654 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:44.654 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:44.654 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:44.655 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.655 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:44.655 03:30:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.655 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.655 03:30:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.940 03:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:47.940 03:30:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:47.940 00:27:47.940 real 0m44.516s 00:27:47.940 user 2m38.668s 00:27:47.940 sys 0m9.663s 00:27:47.940 03:30:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:47.940 03:30:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.940 ************************************ 00:27:47.940 END TEST nvmf_perf_adq 00:27:47.940 ************************************ 00:27:47.940 03:30:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:47.940 03:30:53 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:47.940 03:30:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:47.940 03:30:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.940 03:30:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:47.940 ************************************ 00:27:47.940 START TEST nvmf_shutdown 00:27:47.940 ************************************ 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:47.940 * Looking for test storage... 
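Before the shutdown suite proceeds, note how both perf_adq passes above were gated on nvmf_get_stats: the first required an IO qpair on every poll group (count=4), while the second, with placement-id 1, required at least two groups to stay idle. A sketch of that check, assuming the rpc.py shipped in this tree:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# count poll groups that currently own exactly one IO qpair
scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
    | wc -l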
00:27:47.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:47.940 ************************************ 00:27:47.940 START TEST nvmf_shutdown_tc1 00:27:47.940 ************************************ 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:47.940 03:30:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:47.940 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:47.941 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:47.941 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.941 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.941 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.941 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:47.941 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:47.941 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:47.941 03:30:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:50.474 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:50.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.474 03:30:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:50.474 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:50.474 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:50.474 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:50.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:27:50.475 00:27:50.475 --- 10.0.0.2 ping statistics --- 00:27:50.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.475 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:50.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:27:50.475 00:27:50.475 --- 10.0.0.1 ping statistics --- 00:27:50.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.475 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3276281 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3276281 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3276281 ']' 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.475 [2024-07-15 03:30:56.271068] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
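The nvmf/common.sh@229-268 trace above is the nvmf_tcp_init topology setup: the two e810 ports discovered earlier are split so that one host can act as both NVMe/TCP target and initiator. Condensed into plain commands (interface names, addresses, and port taken verbatim from this trace; a sketch, not a substitute for nvmf/common.sh):

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                    # root ns -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root reachability

Both pings answer in roughly 0.14 ms, so the back-to-back link is up; nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, traced above) and will listen on 10.0.0.2:4420.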
00:27:50.475 [2024-07-15 03:30:56.271139] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.475 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.475 [2024-07-15 03:30:56.334377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:50.475 [2024-07-15 03:30:56.420819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.475 [2024-07-15 03:30:56.420870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.475 [2024-07-15 03:30:56.420905] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.475 [2024-07-15 03:30:56.420918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.475 [2024-07-15 03:30:56.420928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:50.475 [2024-07-15 03:30:56.421019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:50.475 [2024-07-15 03:30:56.421083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:50.475 [2024-07-15 03:30:56.421149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:50.475 [2024-07-15 03:30:56.421151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.475 [2024-07-15 03:30:56.571677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:50.475 03:30:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.475 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:50.476 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:50.476 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.732 03:30:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.732 Malloc1 00:27:50.732 [2024-07-15 03:30:56.660806] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.732 Malloc2 00:27:50.732 Malloc3 00:27:50.732 Malloc4 00:27:50.732 Malloc5 00:27:50.987 Malloc6 00:27:50.987 Malloc7 00:27:50.987 Malloc8 00:27:50.987 Malloc9 00:27:50.987 Malloc10 00:27:50.987 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.987 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:50.987 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:50.987 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3276459 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3276459 
/var/tmp/bdevperf.sock 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3276459 ']' 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:51.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.245 { 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme$subsystem", 00:27:51.245 "trtype": "$TEST_TRANSPORT", 00:27:51.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "$NVMF_PORT", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.245 "hdgst": ${hdgst:-false}, 00:27:51.245 "ddgst": ${ddgst:-false} 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 } 00:27:51.245 EOF 00:27:51.245 )") 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.245 { 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme$subsystem", 00:27:51.245 "trtype": "$TEST_TRANSPORT", 00:27:51.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "$NVMF_PORT", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.245 "hdgst": ${hdgst:-false}, 00:27:51.245 "ddgst": ${ddgst:-false} 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 } 00:27:51.245 EOF 00:27:51.245 )") 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.245 { 00:27:51.245 "params": { 00:27:51.245 
"name": "Nvme$subsystem", 00:27:51.245 "trtype": "$TEST_TRANSPORT", 00:27:51.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "$NVMF_PORT", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.245 "hdgst": ${hdgst:-false}, 00:27:51.245 "ddgst": ${ddgst:-false} 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 } 00:27:51.245 EOF 00:27:51.245 )") 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.245 { 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme$subsystem", 00:27:51.245 "trtype": "$TEST_TRANSPORT", 00:27:51.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "$NVMF_PORT", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.245 "hdgst": ${hdgst:-false}, 00:27:51.245 "ddgst": ${ddgst:-false} 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 } 00:27:51.245 EOF 00:27:51.245 )") 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.245 { 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme$subsystem", 00:27:51.245 "trtype": "$TEST_TRANSPORT", 00:27:51.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "$NVMF_PORT", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.245 "hdgst": ${hdgst:-false}, 00:27:51.245 "ddgst": ${ddgst:-false} 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 } 00:27:51.245 EOF 00:27:51.245 )") 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.245 { 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme$subsystem", 00:27:51.245 "trtype": "$TEST_TRANSPORT", 00:27:51.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "$NVMF_PORT", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.245 "hdgst": ${hdgst:-false}, 00:27:51.245 "ddgst": ${ddgst:-false} 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 } 00:27:51.245 EOF 00:27:51.245 )") 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.245 { 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme$subsystem", 
00:27:51.245 "trtype": "$TEST_TRANSPORT", 00:27:51.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "$NVMF_PORT", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.245 "hdgst": ${hdgst:-false}, 00:27:51.245 "ddgst": ${ddgst:-false} 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 } 00:27:51.245 EOF 00:27:51.245 )") 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.245 { 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme$subsystem", 00:27:51.245 "trtype": "$TEST_TRANSPORT", 00:27:51.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "$NVMF_PORT", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.245 "hdgst": ${hdgst:-false}, 00:27:51.245 "ddgst": ${ddgst:-false} 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 } 00:27:51.245 EOF 00:27:51.245 )") 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.245 { 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme$subsystem", 00:27:51.245 "trtype": "$TEST_TRANSPORT", 00:27:51.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "$NVMF_PORT", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.245 "hdgst": ${hdgst:-false}, 00:27:51.245 "ddgst": ${ddgst:-false} 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 } 00:27:51.245 EOF 00:27:51.245 )") 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.245 { 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme$subsystem", 00:27:51.245 "trtype": "$TEST_TRANSPORT", 00:27:51.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "$NVMF_PORT", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.245 "hdgst": ${hdgst:-false}, 00:27:51.245 "ddgst": ${ddgst:-false} 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 } 00:27:51.245 EOF 00:27:51.245 )") 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:51.245 03:30:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme1", 00:27:51.245 "trtype": "tcp", 00:27:51.245 "traddr": "10.0.0.2", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "4420", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:51.245 "hdgst": false, 00:27:51.245 "ddgst": false 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 },{ 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme2", 00:27:51.245 "trtype": "tcp", 00:27:51.245 "traddr": "10.0.0.2", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "4420", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:51.245 "hdgst": false, 00:27:51.245 "ddgst": false 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 },{ 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme3", 00:27:51.245 "trtype": "tcp", 00:27:51.245 "traddr": "10.0.0.2", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "4420", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:51.245 "hdgst": false, 00:27:51.245 "ddgst": false 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 },{ 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme4", 00:27:51.245 "trtype": "tcp", 00:27:51.245 "traddr": "10.0.0.2", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "4420", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:51.245 "hdgst": false, 00:27:51.245 "ddgst": false 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 },{ 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme5", 00:27:51.245 "trtype": "tcp", 00:27:51.245 "traddr": "10.0.0.2", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "4420", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:51.245 "hdgst": false, 00:27:51.245 "ddgst": false 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 },{ 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme6", 00:27:51.245 "trtype": "tcp", 00:27:51.245 "traddr": "10.0.0.2", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "4420", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:51.245 "hdgst": false, 00:27:51.245 "ddgst": false 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 },{ 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme7", 00:27:51.245 "trtype": "tcp", 00:27:51.245 "traddr": "10.0.0.2", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "4420", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:51.245 "hdgst": false, 00:27:51.245 "ddgst": false 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 },{ 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme8", 00:27:51.245 "trtype": "tcp", 00:27:51.245 "traddr": "10.0.0.2", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "4420", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:51.245 "hdgst": false, 
00:27:51.245 "ddgst": false 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 },{ 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme9", 00:27:51.245 "trtype": "tcp", 00:27:51.245 "traddr": "10.0.0.2", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "4420", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:51.245 "hdgst": false, 00:27:51.245 "ddgst": false 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 },{ 00:27:51.245 "params": { 00:27:51.245 "name": "Nvme10", 00:27:51.245 "trtype": "tcp", 00:27:51.245 "traddr": "10.0.0.2", 00:27:51.245 "adrfam": "ipv4", 00:27:51.245 "trsvcid": "4420", 00:27:51.245 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:51.245 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:51.245 "hdgst": false, 00:27:51.245 "ddgst": false 00:27:51.245 }, 00:27:51.245 "method": "bdev_nvme_attach_controller" 00:27:51.245 }' 00:27:51.245 [2024-07-15 03:30:57.176541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:51.245 [2024-07-15 03:30:57.176613] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:51.245 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.245 [2024-07-15 03:30:57.242305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.245 [2024-07-15 03:30:57.329010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.138 03:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:53.139 03:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:53.139 03:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:53.139 03:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.139 03:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:53.139 03:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.139 03:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3276459 00:27:53.139 03:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:53.139 03:30:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:54.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3276459 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3276281 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:54.071 03:31:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.071 { 00:27:54.071 "params": { 00:27:54.071 "name": "Nvme$subsystem", 00:27:54.071 "trtype": "$TEST_TRANSPORT", 00:27:54.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.071 "adrfam": "ipv4", 00:27:54.071 "trsvcid": "$NVMF_PORT", 00:27:54.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.071 "hdgst": ${hdgst:-false}, 00:27:54.071 "ddgst": ${ddgst:-false} 00:27:54.071 }, 00:27:54.071 "method": "bdev_nvme_attach_controller" 00:27:54.071 } 00:27:54.071 EOF 00:27:54.071 )") 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.071 { 00:27:54.071 "params": { 00:27:54.071 "name": "Nvme$subsystem", 00:27:54.071 "trtype": "$TEST_TRANSPORT", 00:27:54.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.071 "adrfam": "ipv4", 00:27:54.071 "trsvcid": "$NVMF_PORT", 00:27:54.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.071 "hdgst": ${hdgst:-false}, 00:27:54.071 "ddgst": ${ddgst:-false} 00:27:54.071 }, 00:27:54.071 "method": "bdev_nvme_attach_controller" 00:27:54.071 } 00:27:54.071 EOF 00:27:54.071 )") 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.071 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.071 { 00:27:54.071 "params": { 00:27:54.071 "name": "Nvme$subsystem", 00:27:54.071 "trtype": "$TEST_TRANSPORT", 00:27:54.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.071 "adrfam": "ipv4", 00:27:54.071 "trsvcid": "$NVMF_PORT", 00:27:54.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.071 "hdgst": ${hdgst:-false}, 00:27:54.071 "ddgst": ${ddgst:-false} 00:27:54.071 }, 00:27:54.071 "method": "bdev_nvme_attach_controller" 00:27:54.071 } 00:27:54.071 EOF 00:27:54.071 )") 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.072 { 00:27:54.072 "params": { 00:27:54.072 "name": "Nvme$subsystem", 00:27:54.072 "trtype": "$TEST_TRANSPORT", 00:27:54.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.072 "adrfam": "ipv4", 00:27:54.072 "trsvcid": "$NVMF_PORT", 00:27:54.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.072 "hdgst": ${hdgst:-false}, 00:27:54.072 "ddgst": ${ddgst:-false} 00:27:54.072 }, 00:27:54.072 "method": "bdev_nvme_attach_controller" 00:27:54.072 } 00:27:54.072 EOF 00:27:54.072 )") 00:27:54.072 03:31:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.072 { 00:27:54.072 "params": { 00:27:54.072 "name": "Nvme$subsystem", 00:27:54.072 "trtype": "$TEST_TRANSPORT", 00:27:54.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.072 "adrfam": "ipv4", 00:27:54.072 "trsvcid": "$NVMF_PORT", 00:27:54.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.072 "hdgst": ${hdgst:-false}, 00:27:54.072 "ddgst": ${ddgst:-false} 00:27:54.072 }, 00:27:54.072 "method": "bdev_nvme_attach_controller" 00:27:54.072 } 00:27:54.072 EOF 00:27:54.072 )") 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.072 { 00:27:54.072 "params": { 00:27:54.072 "name": "Nvme$subsystem", 00:27:54.072 "trtype": "$TEST_TRANSPORT", 00:27:54.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.072 "adrfam": "ipv4", 00:27:54.072 "trsvcid": "$NVMF_PORT", 00:27:54.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.072 "hdgst": ${hdgst:-false}, 00:27:54.072 "ddgst": ${ddgst:-false} 00:27:54.072 }, 00:27:54.072 "method": "bdev_nvme_attach_controller" 00:27:54.072 } 00:27:54.072 EOF 00:27:54.072 )") 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.072 { 00:27:54.072 "params": { 00:27:54.072 "name": "Nvme$subsystem", 00:27:54.072 "trtype": "$TEST_TRANSPORT", 00:27:54.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.072 "adrfam": "ipv4", 00:27:54.072 "trsvcid": "$NVMF_PORT", 00:27:54.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.072 "hdgst": ${hdgst:-false}, 00:27:54.072 "ddgst": ${ddgst:-false} 00:27:54.072 }, 00:27:54.072 "method": "bdev_nvme_attach_controller" 00:27:54.072 } 00:27:54.072 EOF 00:27:54.072 )") 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.072 { 00:27:54.072 "params": { 00:27:54.072 "name": "Nvme$subsystem", 00:27:54.072 "trtype": "$TEST_TRANSPORT", 00:27:54.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.072 "adrfam": "ipv4", 00:27:54.072 "trsvcid": "$NVMF_PORT", 00:27:54.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.072 "hdgst": ${hdgst:-false}, 00:27:54.072 "ddgst": ${ddgst:-false} 00:27:54.072 }, 00:27:54.072 "method": "bdev_nvme_attach_controller" 00:27:54.072 } 00:27:54.072 EOF 00:27:54.072 )") 00:27:54.072 03:31:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.072 { 00:27:54.072 "params": { 00:27:54.072 "name": "Nvme$subsystem", 00:27:54.072 "trtype": "$TEST_TRANSPORT", 00:27:54.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.072 "adrfam": "ipv4", 00:27:54.072 "trsvcid": "$NVMF_PORT", 00:27:54.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.072 "hdgst": ${hdgst:-false}, 00:27:54.072 "ddgst": ${ddgst:-false} 00:27:54.072 }, 00:27:54.072 "method": "bdev_nvme_attach_controller" 00:27:54.072 } 00:27:54.072 EOF 00:27:54.072 )") 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.072 { 00:27:54.072 "params": { 00:27:54.072 "name": "Nvme$subsystem", 00:27:54.072 "trtype": "$TEST_TRANSPORT", 00:27:54.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.072 "adrfam": "ipv4", 00:27:54.072 "trsvcid": "$NVMF_PORT", 00:27:54.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.072 "hdgst": ${hdgst:-false}, 00:27:54.072 "ddgst": ${ddgst:-false} 00:27:54.072 }, 00:27:54.072 "method": "bdev_nvme_attach_controller" 00:27:54.072 } 00:27:54.072 EOF 00:27:54.072 )") 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
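Note the delivery mechanism visible in the 'Killed' line further up: both bdev_svc (--json /dev/fd/63) and bdevperf (--json /dev/fd/62) receive the generated config through bash process substitution, so the JSON is streamed over an anonymous fd and never written to disk. Spelled out, the bdevperf launch traced at shutdown.sh@91 amounts to (paths and flags from this log):

# stream the generated target config to bdevperf over an anonymous fd
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1    # QD 64, 64 KiB IOs, verify workload, 1 second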
00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:54.072 03:31:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:54.072 "params": { 00:27:54.072 "name": "Nvme1", 00:27:54.072 "trtype": "tcp", 00:27:54.072 "traddr": "10.0.0.2", 00:27:54.073 "adrfam": "ipv4", 00:27:54.073 "trsvcid": "4420", 00:27:54.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:54.073 "hdgst": false, 00:27:54.073 "ddgst": false 00:27:54.073 }, 00:27:54.073 "method": "bdev_nvme_attach_controller" 00:27:54.073 },{ 00:27:54.073 "params": { 00:27:54.073 "name": "Nvme2", 00:27:54.073 "trtype": "tcp", 00:27:54.073 "traddr": "10.0.0.2", 00:27:54.073 "adrfam": "ipv4", 00:27:54.073 "trsvcid": "4420", 00:27:54.073 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:54.073 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:54.073 "hdgst": false, 00:27:54.073 "ddgst": false 00:27:54.073 }, 00:27:54.073 "method": "bdev_nvme_attach_controller" 00:27:54.073 },{ 00:27:54.073 "params": { 00:27:54.073 "name": "Nvme3", 00:27:54.073 "trtype": "tcp", 00:27:54.073 "traddr": "10.0.0.2", 00:27:54.073 "adrfam": "ipv4", 00:27:54.073 "trsvcid": "4420", 00:27:54.073 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:54.073 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:54.073 "hdgst": false, 00:27:54.073 "ddgst": false 00:27:54.073 }, 00:27:54.073 "method": "bdev_nvme_attach_controller" 00:27:54.073 },{ 00:27:54.073 "params": { 00:27:54.073 "name": "Nvme4", 00:27:54.073 "trtype": "tcp", 00:27:54.073 "traddr": "10.0.0.2", 00:27:54.073 "adrfam": "ipv4", 00:27:54.073 "trsvcid": "4420", 00:27:54.073 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:54.073 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:54.073 "hdgst": false, 00:27:54.073 "ddgst": false 00:27:54.073 }, 00:27:54.073 "method": "bdev_nvme_attach_controller" 00:27:54.073 },{ 00:27:54.073 "params": { 00:27:54.073 "name": "Nvme5", 00:27:54.073 "trtype": "tcp", 00:27:54.073 "traddr": "10.0.0.2", 00:27:54.073 "adrfam": "ipv4", 00:27:54.073 "trsvcid": "4420", 00:27:54.073 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:54.073 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:54.073 "hdgst": false, 00:27:54.073 "ddgst": false 00:27:54.073 }, 00:27:54.073 "method": "bdev_nvme_attach_controller" 00:27:54.073 },{ 00:27:54.073 "params": { 00:27:54.073 "name": "Nvme6", 00:27:54.073 "trtype": "tcp", 00:27:54.073 "traddr": "10.0.0.2", 00:27:54.073 "adrfam": "ipv4", 00:27:54.073 "trsvcid": "4420", 00:27:54.073 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:54.073 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:54.073 "hdgst": false, 00:27:54.073 "ddgst": false 00:27:54.073 }, 00:27:54.073 "method": "bdev_nvme_attach_controller" 00:27:54.073 },{ 00:27:54.073 "params": { 00:27:54.073 "name": "Nvme7", 00:27:54.073 "trtype": "tcp", 00:27:54.073 "traddr": "10.0.0.2", 00:27:54.073 "adrfam": "ipv4", 00:27:54.073 "trsvcid": "4420", 00:27:54.073 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:54.073 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:54.073 "hdgst": false, 00:27:54.073 "ddgst": false 00:27:54.073 }, 00:27:54.073 "method": "bdev_nvme_attach_controller" 00:27:54.073 },{ 00:27:54.073 "params": { 00:27:54.073 "name": "Nvme8", 00:27:54.073 "trtype": "tcp", 00:27:54.073 "traddr": "10.0.0.2", 00:27:54.073 "adrfam": "ipv4", 00:27:54.073 "trsvcid": "4420", 00:27:54.073 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:54.073 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:54.073 "hdgst": false, 
00:27:54.073 "ddgst": false 00:27:54.073 }, 00:27:54.073 "method": "bdev_nvme_attach_controller" 00:27:54.073 },{ 00:27:54.073 "params": { 00:27:54.073 "name": "Nvme9", 00:27:54.073 "trtype": "tcp", 00:27:54.073 "traddr": "10.0.0.2", 00:27:54.073 "adrfam": "ipv4", 00:27:54.073 "trsvcid": "4420", 00:27:54.073 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:54.073 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:54.073 "hdgst": false, 00:27:54.073 "ddgst": false 00:27:54.073 }, 00:27:54.073 "method": "bdev_nvme_attach_controller" 00:27:54.073 },{ 00:27:54.073 "params": { 00:27:54.073 "name": "Nvme10", 00:27:54.073 "trtype": "tcp", 00:27:54.073 "traddr": "10.0.0.2", 00:27:54.073 "adrfam": "ipv4", 00:27:54.073 "trsvcid": "4420", 00:27:54.073 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:54.073 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:54.073 "hdgst": false, 00:27:54.073 "ddgst": false 00:27:54.073 }, 00:27:54.073 "method": "bdev_nvme_attach_controller" 00:27:54.073 }' 00:27:54.073 [2024-07-15 03:31:00.203883] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:54.073 [2024-07-15 03:31:00.203962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276877 ] 00:27:54.331 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.331 [2024-07-15 03:31:00.271024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.331 [2024-07-15 03:31:00.358012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.756 Running I/O for 1 seconds... 00:27:57.128 00:27:57.128 Latency(us) 00:27:57.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.128 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.128 Verification LBA range: start 0x0 length 0x400 00:27:57.128 Nvme1n1 : 1.13 226.90 14.18 0.00 0.00 279283.11 22233.69 257872.02 00:27:57.128 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.128 Verification LBA range: start 0x0 length 0x400 00:27:57.128 Nvme2n1 : 1.14 224.03 14.00 0.00 0.00 278345.77 21845.33 257872.02 00:27:57.128 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.128 Verification LBA range: start 0x0 length 0x400 00:27:57.128 Nvme3n1 : 1.16 277.01 17.31 0.00 0.00 221418.84 17670.45 225249.66 00:27:57.128 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.128 Verification LBA range: start 0x0 length 0x400 00:27:57.128 Nvme4n1 : 1.10 246.76 15.42 0.00 0.00 237352.37 10194.49 250104.79 00:27:57.128 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.128 Verification LBA range: start 0x0 length 0x400 00:27:57.128 Nvme5n1 : 1.15 222.39 13.90 0.00 0.00 266785.00 21262.79 248551.35 00:27:57.128 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.128 Verification LBA range: start 0x0 length 0x400 00:27:57.128 Nvme6n1 : 1.15 223.29 13.96 0.00 0.00 260847.50 21165.70 250104.79 00:27:57.128 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.128 Verification LBA range: start 0x0 length 0x400 00:27:57.128 Nvme7n1 : 1.14 225.37 14.09 0.00 0.00 253867.61 17087.91 239230.67 00:27:57.128 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.128 Verification LBA range: start 
0x0 length 0x400 00:27:57.128 Nvme8n1 : 1.19 269.31 16.83 0.00 0.00 209059.54 11990.66 246997.90 00:27:57.128 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.128 Verification LBA range: start 0x0 length 0x400 00:27:57.128 Nvme9n1 : 1.19 215.93 13.50 0.00 0.00 256988.35 22233.69 281173.71 00:27:57.128 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:57.128 Verification LBA range: start 0x0 length 0x400 00:27:57.128 Nvme10n1 : 1.20 266.76 16.67 0.00 0.00 204983.79 6893.42 259425.47 00:27:57.128 =================================================================================================================== 00:27:57.128 Total : 2397.75 149.86 0.00 0.00 244407.81 6893.42 281173.71 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:57.385 rmmod nvme_tcp 00:27:57.385 rmmod nvme_fabrics 00:27:57.385 rmmod nvme_keyring 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3276281 ']' 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3276281 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3276281 ']' 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3276281 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3276281 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
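The common/autotest_common.sh@948-972 records here and just below trace the killprocess guard used to take down the tc1 target: confirm the pid is still alive, make sure its comm name is not a sudo wrapper, then kill and reap it. A condensed sketch of that guard (simplified from this trace; the real helper also handles FreeBSD and sudo-launched daemons):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                     # no pid, nothing to do
    kill -0 "$pid" || return 0                    # already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1   # never signal the sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                           # reap it; ignore its exit status
}

killprocess 3276281    # the nvmfpid recorded when nvmf_tgt started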
00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3276281' 00:27:57.385 killing process with pid 3276281 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3276281 00:27:57.385 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3276281 00:27:57.951 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:57.951 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:57.951 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:57.952 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:57.952 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:57.952 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.952 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.952 03:31:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:59.851 00:27:59.851 real 0m11.969s 00:27:59.851 user 0m34.492s 00:27:59.851 sys 0m3.243s 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.851 ************************************ 00:27:59.851 END TEST nvmf_shutdown_tc1 00:27:59.851 ************************************ 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:59.851 ************************************ 00:27:59.851 START TEST nvmf_shutdown_tc2 00:27:59.851 ************************************ 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.851 03:31:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:59.851 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:59.851 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:59.851 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:59.851 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:59.851 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.852 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.109 03:31:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:28:00.109 00:28:00.109 --- 10.0.0.2 ping statistics --- 00:28:00.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.109 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:28:00.109 00:28:00.109 --- 10.0.0.1 ping statistics --- 00:28:00.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.109 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=3277638 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3277638 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3277638 ']' 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:00.109 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.109 [2024-07-15 03:31:06.129477] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:00.109 [2024-07-15 03:31:06.129568] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.109 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.109 [2024-07-15 03:31:06.195141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:00.366 [2024-07-15 03:31:06.281656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.366 [2024-07-15 03:31:06.281706] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.366 [2024-07-15 03:31:06.281728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.366 [2024-07-15 03:31:06.281738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.366 [2024-07-15 03:31:06.281748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
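The namespace plumbing traced above reduces to a few commands: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace so that target and initiator traffic crosses the physical link rather than loopback. Below is a minimal sketch using this run's interface names and addresses (both are hardware-specific; the preliminary `ip -4 addr flush` steps are omitted). The doubled "ip netns exec cvl_0_0_ns_spdk" prefix on the nvmf_tgt launch line appears to come from nvmf/common.sh@270 prepending NVMF_TARGET_NS_CMD onto NVMF_APP on each nvmftestinit in the same shell, and re-entering the same namespace is harmless.

# Move the target-side port into its own namespace; the initiator port
# stays in the root namespace. Names and subnets are from this run.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> initiator
# The target itself then runs inside the namespace, as traced above:
ip netns exec "$NS" ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E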
00:28:00.366 [2024-07-15 03:31:06.281832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.366 [2024-07-15 03:31:06.281963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.366 [2024-07-15 03:31:06.282018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.366 [2024-07-15 03:31:06.282015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.366 [2024-07-15 03:31:06.444776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.366 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:00.367 03:31:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.367 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.624 Malloc1 00:28:00.624 [2024-07-15 03:31:06.534607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.624 Malloc2 00:28:00.624 Malloc3 00:28:00.624 Malloc4 00:28:00.624 Malloc5 00:28:00.624 Malloc6 00:28:00.881 Malloc7 00:28:00.881 Malloc8 00:28:00.881 Malloc9 00:28:00.881 Malloc10 00:28:00.881 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.881 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:00.881 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:00.881 03:31:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3277814 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3277814 /var/tmp/bdevperf.sock 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3277814 ']' 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
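For reference, the bdevperf invocation traced here expands to the following (workspace-relative path; /dev/fd/63 in the launch line is, presumably, the process substitution carrying the generated JSON config). The -q/-o/-w values reappear verbatim in the per-job lines of the summary table at the end of the run (depth: 64, IO size: 65536, workload: verify).

# -r: private RPC socket that the waitforio loop later polls with
#     bdev_get_iostat (read_io_count climbs 3 -> 67 -> 195 below; the
#     shutdown proceeds once it reaches at least 100)
# -q 64: 64 outstanding IOs per bdev; -o 65536: 64 KiB IO size
# -w verify: verification workload; -t 10: run for 10 seconds
./spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 10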
00:28:00.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.881 { 00:28:00.881 "params": { 00:28:00.881 "name": "Nvme$subsystem", 00:28:00.881 "trtype": "$TEST_TRANSPORT", 00:28:00.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.881 "adrfam": "ipv4", 00:28:00.881 "trsvcid": "$NVMF_PORT", 00:28:00.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.881 "hdgst": ${hdgst:-false}, 00:28:00.881 "ddgst": ${ddgst:-false} 00:28:00.881 }, 00:28:00.881 "method": "bdev_nvme_attach_controller" 00:28:00.881 } 00:28:00.881 EOF 00:28:00.881 )") 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.881 { 00:28:00.881 "params": { 00:28:00.881 "name": "Nvme$subsystem", 00:28:00.881 "trtype": "$TEST_TRANSPORT", 00:28:00.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.881 "adrfam": "ipv4", 00:28:00.881 "trsvcid": "$NVMF_PORT", 00:28:00.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.881 "hdgst": ${hdgst:-false}, 00:28:00.881 "ddgst": ${ddgst:-false} 00:28:00.881 }, 00:28:00.881 "method": "bdev_nvme_attach_controller" 00:28:00.881 } 00:28:00.881 EOF 00:28:00.881 )") 00:28:00.881 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.163 { 00:28:01.163 "params": { 00:28:01.163 "name": "Nvme$subsystem", 00:28:01.163 "trtype": "$TEST_TRANSPORT", 00:28:01.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.163 "adrfam": "ipv4", 00:28:01.163 "trsvcid": "$NVMF_PORT", 00:28:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.163 "hdgst": ${hdgst:-false}, 00:28:01.163 "ddgst": ${ddgst:-false} 00:28:01.163 }, 00:28:01.163 "method": "bdev_nvme_attach_controller" 00:28:01.163 } 00:28:01.163 EOF 00:28:01.163 )") 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.163 { 00:28:01.163 "params": { 00:28:01.163 "name": "Nvme$subsystem", 00:28:01.163 "trtype": "$TEST_TRANSPORT", 00:28:01.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.163 "adrfam": "ipv4", 00:28:01.163 "trsvcid": "$NVMF_PORT", 
00:28:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.163 "hdgst": ${hdgst:-false}, 00:28:01.163 "ddgst": ${ddgst:-false} 00:28:01.163 }, 00:28:01.163 "method": "bdev_nvme_attach_controller" 00:28:01.163 } 00:28:01.163 EOF 00:28:01.163 )") 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.163 { 00:28:01.163 "params": { 00:28:01.163 "name": "Nvme$subsystem", 00:28:01.163 "trtype": "$TEST_TRANSPORT", 00:28:01.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.163 "adrfam": "ipv4", 00:28:01.163 "trsvcid": "$NVMF_PORT", 00:28:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.163 "hdgst": ${hdgst:-false}, 00:28:01.163 "ddgst": ${ddgst:-false} 00:28:01.163 }, 00:28:01.163 "method": "bdev_nvme_attach_controller" 00:28:01.163 } 00:28:01.163 EOF 00:28:01.163 )") 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.163 { 00:28:01.163 "params": { 00:28:01.163 "name": "Nvme$subsystem", 00:28:01.163 "trtype": "$TEST_TRANSPORT", 00:28:01.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.163 "adrfam": "ipv4", 00:28:01.163 "trsvcid": "$NVMF_PORT", 00:28:01.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.163 "hdgst": ${hdgst:-false}, 00:28:01.163 "ddgst": ${ddgst:-false} 00:28:01.163 }, 00:28:01.163 "method": "bdev_nvme_attach_controller" 00:28:01.163 } 00:28:01.163 EOF 00:28:01.163 )") 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.163 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.163 { 00:28:01.163 "params": { 00:28:01.163 "name": "Nvme$subsystem", 00:28:01.163 "trtype": "$TEST_TRANSPORT", 00:28:01.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.163 "adrfam": "ipv4", 00:28:01.164 "trsvcid": "$NVMF_PORT", 00:28:01.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.164 "hdgst": ${hdgst:-false}, 00:28:01.164 "ddgst": ${ddgst:-false} 00:28:01.164 }, 00:28:01.164 "method": "bdev_nvme_attach_controller" 00:28:01.164 } 00:28:01.164 EOF 00:28:01.164 )") 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.164 { 00:28:01.164 "params": { 00:28:01.164 "name": "Nvme$subsystem", 00:28:01.164 "trtype": "$TEST_TRANSPORT", 00:28:01.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.164 "adrfam": "ipv4", 00:28:01.164 "trsvcid": "$NVMF_PORT", 00:28:01.164 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.164 "hdgst": ${hdgst:-false}, 00:28:01.164 "ddgst": ${ddgst:-false} 00:28:01.164 }, 00:28:01.164 "method": "bdev_nvme_attach_controller" 00:28:01.164 } 00:28:01.164 EOF 00:28:01.164 )") 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.164 { 00:28:01.164 "params": { 00:28:01.164 "name": "Nvme$subsystem", 00:28:01.164 "trtype": "$TEST_TRANSPORT", 00:28:01.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.164 "adrfam": "ipv4", 00:28:01.164 "trsvcid": "$NVMF_PORT", 00:28:01.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.164 "hdgst": ${hdgst:-false}, 00:28:01.164 "ddgst": ${ddgst:-false} 00:28:01.164 }, 00:28:01.164 "method": "bdev_nvme_attach_controller" 00:28:01.164 } 00:28:01.164 EOF 00:28:01.164 )") 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.164 { 00:28:01.164 "params": { 00:28:01.164 "name": "Nvme$subsystem", 00:28:01.164 "trtype": "$TEST_TRANSPORT", 00:28:01.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.164 "adrfam": "ipv4", 00:28:01.164 "trsvcid": "$NVMF_PORT", 00:28:01.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.164 "hdgst": ${hdgst:-false}, 00:28:01.164 "ddgst": ${ddgst:-false} 00:28:01.164 }, 00:28:01.164 "method": "bdev_nvme_attach_controller" 00:28:01.164 } 00:28:01.164 EOF 00:28:01.164 )") 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:01.164 03:31:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:01.164 "params": { 00:28:01.164 "name": "Nvme1", 00:28:01.164 "trtype": "tcp", 00:28:01.164 "traddr": "10.0.0.2", 00:28:01.164 "adrfam": "ipv4", 00:28:01.164 "trsvcid": "4420", 00:28:01.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:01.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:01.164 "hdgst": false, 00:28:01.164 "ddgst": false 00:28:01.164 }, 00:28:01.164 "method": "bdev_nvme_attach_controller" 00:28:01.164 },{ 00:28:01.164 "params": { 00:28:01.164 "name": "Nvme2", 00:28:01.164 "trtype": "tcp", 00:28:01.164 "traddr": "10.0.0.2", 00:28:01.164 "adrfam": "ipv4", 00:28:01.164 "trsvcid": "4420", 00:28:01.164 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:01.164 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:01.164 "hdgst": false, 00:28:01.164 "ddgst": false 00:28:01.164 }, 00:28:01.164 "method": "bdev_nvme_attach_controller" 00:28:01.164 },{ 00:28:01.164 "params": { 00:28:01.164 "name": "Nvme3", 00:28:01.164 "trtype": "tcp", 00:28:01.164 "traddr": "10.0.0.2", 00:28:01.164 "adrfam": "ipv4", 00:28:01.164 "trsvcid": "4420", 00:28:01.164 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:01.164 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:01.164 "hdgst": false, 00:28:01.164 "ddgst": false 00:28:01.164 }, 00:28:01.164 "method": "bdev_nvme_attach_controller" 00:28:01.164 },{ 00:28:01.164 "params": { 00:28:01.164 "name": "Nvme4", 00:28:01.164 "trtype": "tcp", 00:28:01.164 "traddr": "10.0.0.2", 00:28:01.164 "adrfam": "ipv4", 00:28:01.164 "trsvcid": "4420", 00:28:01.164 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:01.164 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:01.164 "hdgst": false, 00:28:01.164 "ddgst": false 00:28:01.164 }, 00:28:01.164 "method": "bdev_nvme_attach_controller" 00:28:01.164 },{ 00:28:01.164 "params": { 00:28:01.164 "name": "Nvme5", 00:28:01.164 "trtype": "tcp", 00:28:01.164 "traddr": "10.0.0.2", 00:28:01.164 "adrfam": "ipv4", 00:28:01.164 "trsvcid": "4420", 00:28:01.164 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:01.164 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:01.164 "hdgst": false, 00:28:01.164 "ddgst": false 00:28:01.164 }, 00:28:01.164 "method": "bdev_nvme_attach_controller" 00:28:01.164 },{ 00:28:01.164 "params": { 00:28:01.164 "name": "Nvme6", 00:28:01.164 "trtype": "tcp", 00:28:01.164 "traddr": "10.0.0.2", 00:28:01.164 "adrfam": "ipv4", 00:28:01.164 "trsvcid": "4420", 00:28:01.164 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:01.164 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:01.164 "hdgst": false, 00:28:01.164 "ddgst": false 00:28:01.164 }, 00:28:01.164 "method": "bdev_nvme_attach_controller" 00:28:01.164 },{ 00:28:01.164 "params": { 00:28:01.164 "name": "Nvme7", 00:28:01.164 "trtype": "tcp", 00:28:01.164 "traddr": "10.0.0.2", 00:28:01.164 "adrfam": "ipv4", 00:28:01.165 "trsvcid": "4420", 00:28:01.165 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:01.165 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:01.165 "hdgst": false, 00:28:01.165 "ddgst": false 00:28:01.165 }, 00:28:01.165 "method": "bdev_nvme_attach_controller" 00:28:01.165 },{ 00:28:01.165 "params": { 00:28:01.165 "name": "Nvme8", 00:28:01.165 "trtype": "tcp", 00:28:01.165 "traddr": "10.0.0.2", 00:28:01.165 "adrfam": "ipv4", 00:28:01.165 "trsvcid": "4420", 00:28:01.165 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:01.165 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:01.165 "hdgst": false, 
00:28:01.165 "ddgst": false 00:28:01.165 }, 00:28:01.165 "method": "bdev_nvme_attach_controller" 00:28:01.165 },{ 00:28:01.165 "params": { 00:28:01.165 "name": "Nvme9", 00:28:01.165 "trtype": "tcp", 00:28:01.165 "traddr": "10.0.0.2", 00:28:01.165 "adrfam": "ipv4", 00:28:01.165 "trsvcid": "4420", 00:28:01.165 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:01.165 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:01.165 "hdgst": false, 00:28:01.165 "ddgst": false 00:28:01.165 }, 00:28:01.165 "method": "bdev_nvme_attach_controller" 00:28:01.165 },{ 00:28:01.165 "params": { 00:28:01.165 "name": "Nvme10", 00:28:01.165 "trtype": "tcp", 00:28:01.165 "traddr": "10.0.0.2", 00:28:01.165 "adrfam": "ipv4", 00:28:01.165 "trsvcid": "4420", 00:28:01.165 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:01.165 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:01.165 "hdgst": false, 00:28:01.165 "ddgst": false 00:28:01.165 }, 00:28:01.165 "method": "bdev_nvme_attach_controller" 00:28:01.165 }' 00:28:01.165 [2024-07-15 03:31:07.061654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:01.165 [2024-07-15 03:31:07.061745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277814 ] 00:28:01.165 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.165 [2024-07-15 03:31:07.125958] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.165 [2024-07-15 03:31:07.213176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.057 Running I/O for 10 seconds... 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:03.057 03:31:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:03.057 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:03.314 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:03.314 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:03.314 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:03.314 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:03.314 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.314 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.572 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.572 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:03.572 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:03.572 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3277814 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3277814 ']' 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3277814 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3277814 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3277814' 00:28:03.829 killing process with pid 3277814 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3277814 00:28:03.829 03:31:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3277814 00:28:03.829 Received shutdown signal, test time was about 0.951028 seconds 00:28:03.829 00:28:03.829 Latency(us) 00:28:03.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.829 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.829 Verification LBA range: start 0x0 length 0x400 00:28:03.829 Nvme1n1 : 0.94 271.77 16.99 0.00 0.00 231795.29 18835.53 245444.46 00:28:03.829 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.829 Verification LBA range: start 0x0 length 0x400 00:28:03.829 Nvme2n1 : 0.93 206.32 12.89 0.00 0.00 300434.27 22816.24 273406.48 00:28:03.829 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.829 Verification LBA range: start 0x0 length 0x400 00:28:03.829 Nvme3n1 : 0.95 269.67 16.85 0.00 0.00 225058.89 30292.20 239230.67 00:28:03.829 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.829 Verification LBA range: start 0x0 length 0x400 00:28:03.829 Nvme4n1 : 0.95 269.41 16.84 0.00 0.00 220918.52 16990.81 256318.58 00:28:03.829 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.829 Verification LBA range: start 0x0 length 0x400 00:28:03.830 Nvme5n1 : 0.91 211.68 13.23 0.00 0.00 274438.07 28932.93 248551.35 00:28:03.830 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.830 Verification LBA range: start 0x0 length 0x400 00:28:03.830 Nvme6n1 : 0.92 208.66 13.04 0.00 0.00 272851.25 23107.51 253211.69 00:28:03.830 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.830 Verification LBA range: start 0x0 length 0x400 00:28:03.830 Nvme7n1 : 0.91 210.92 13.18 0.00 0.00 263577.60 19903.53 253211.69 00:28:03.830 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.830 Verification LBA range: start 0x0 length 0x400 00:28:03.830 Nvme8n1 : 0.93 274.00 17.13 0.00 0.00 199081.15 19515.16 243891.01 00:28:03.830 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.830 Verification LBA range: start 0x0 length 0x400 00:28:03.830 Nvme9n1 : 0.93 211.67 13.23 0.00 0.00 250850.12 2924.85 251658.24 00:28:03.830 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:03.830 Verification LBA range: start 0x0 length 0x400 00:28:03.830 Nvme10n1 : 0.94 204.01 12.75 0.00 0.00 255488.25 25049.32 284280.60 00:28:03.830 
=================================================================================================================== 00:28:03.830 Total : 2338.10 146.13 0.00 0.00 245901.28 2924.85 284280.60 00:28:04.087 03:31:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3277638 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:05.019 rmmod nvme_tcp 00:28:05.019 rmmod nvme_fabrics 00:28:05.019 rmmod nvme_keyring 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3277638 ']' 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3277638 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3277638 ']' 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3277638 00:28:05.019 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:05.277 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.277 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3277638 00:28:05.277 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:05.277 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:05.277 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3277638' 00:28:05.277 killing process with pid 3277638 00:28:05.277 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3277638 00:28:05.277 03:31:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3277638 00:28:05.535 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:05.535 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:05.535 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:05.535 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.535 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.535 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.535 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.535 03:31:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.065 00:28:08.065 real 0m7.765s 00:28:08.065 user 0m23.802s 00:28:08.065 sys 0m1.547s 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.065 ************************************ 00:28:08.065 END TEST nvmf_shutdown_tc2 00:28:08.065 ************************************ 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:08.065 ************************************ 00:28:08.065 START TEST nvmf_shutdown_tc3 00:28:08.065 ************************************ 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.065 03:31:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:08.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:08.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.065 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:08.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:08.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.066 03:31:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:08.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:28:08.066 00:28:08.066 --- 10.0.0.2 ping statistics --- 00:28:08.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.066 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:08.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:28:08.066 00:28:08.066 --- 10.0.0.1 ping statistics --- 00:28:08.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.066 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3278729 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3278729 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3278729 ']' 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:08.066 03:31:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.066 [2024-07-15 03:31:13.975341] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:08.066 [2024-07-15 03:31:13.975444] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.066 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.066 [2024-07-15 03:31:14.051721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:08.066 [2024-07-15 03:31:14.148496] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.066 [2024-07-15 03:31:14.148556] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.066 [2024-07-15 03:31:14.148573] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.066 [2024-07-15 03:31:14.148587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.066 [2024-07-15 03:31:14.148598] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
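A note on the target launch assembled above: every nvmf_tgt invocation is wrapped in the namespace command so the target binds only to the cvl_0_0 interface inside cvl_0_0_ns_spdk; the stacked "ip netns exec cvl_0_0_ns_spdk" prefixes are the same wrapper applied repeatedly and behave like a single one. Condensed into a minimal sketch (paths shortened, flag values taken from this run):

# launch the target inside the test namespace:
#   -i 0      shared-memory instance id (matches 'spdk_trace -s nvmf -i 0' above)
#   -e 0xFFFF enable all tracepoint groups ("Tracepoint Group Mask 0xFFFF" above)
#   -m 0x1E   core mask 0b11110 -> reactors on cores 1-4, as logged above
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!                 # 3278729 in this run
waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock accepts RPCs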
00:28:08.066 [2024-07-15 03:31:14.148682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.066 [2024-07-15 03:31:14.148800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.066 [2024-07-15 03:31:14.148866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:08.066 [2024-07-15 03:31:14.148868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.324 [2024-07-15 03:31:14.302762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.324 03:31:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.324 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.324 Malloc1 00:28:08.324 [2024-07-15 03:31:14.392230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.324 Malloc2 00:28:08.582 Malloc3 00:28:08.582 Malloc4 00:28:08.582 Malloc5 00:28:08.582 Malloc6 00:28:08.582 Malloc7 00:28:08.841 Malloc8 00:28:08.841 Malloc9 00:28:08.841 Malloc10 00:28:08.841 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.841 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:08.841 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3278902 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3278902 /var/tmp/bdevperf.sock 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3278902 ']' 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:08.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
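A note on create_subsystems above: shutdown.sh batches the whole setup into rpcs.txt and replays it through rpc_cmd in one pass, which is why the ten Malloc bdevs and the 10.0.0.2:4420 listener notice appear together. A sketch of what each '-- # cat' call most plausibly appends per subsystem; the RPC names are SPDK's standard bdev/nvmf RPCs, but the exact size and serial arguments and the $MALLOC_* variables are assumptions, not read from this log:

for i in "${num_subsystems[@]}"; do   # 1..10
cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"   # yields the Malloc1..Malloc10 lines above

bdevperf then drives all ten controllers at once: -q 64 keeps 64 commands in flight per bdev, -o 65536 issues 64 KiB I/Os, -w verify reads back what was written, and -t 10 runs for ten seconds.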
00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.842 { 00:28:08.842 "params": { 00:28:08.842 "name": "Nvme$subsystem", 00:28:08.842 "trtype": "$TEST_TRANSPORT", 00:28:08.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.842 "adrfam": "ipv4", 00:28:08.842 "trsvcid": "$NVMF_PORT", 00:28:08.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.842 "hdgst": ${hdgst:-false}, 00:28:08.842 "ddgst": ${ddgst:-false} 00:28:08.842 }, 00:28:08.842 "method": "bdev_nvme_attach_controller" 00:28:08.842 } 00:28:08.842 EOF 00:28:08.842 )") 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.842 { 00:28:08.842 "params": { 00:28:08.842 "name": "Nvme$subsystem", 00:28:08.842 "trtype": "$TEST_TRANSPORT", 00:28:08.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.842 "adrfam": "ipv4", 00:28:08.842 "trsvcid": "$NVMF_PORT", 00:28:08.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.842 "hdgst": ${hdgst:-false}, 00:28:08.842 "ddgst": ${ddgst:-false} 00:28:08.842 }, 00:28:08.842 "method": "bdev_nvme_attach_controller" 00:28:08.842 } 00:28:08.842 EOF 00:28:08.842 )") 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.842 { 00:28:08.842 "params": { 00:28:08.842 "name": "Nvme$subsystem", 00:28:08.842 "trtype": "$TEST_TRANSPORT", 00:28:08.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.842 "adrfam": "ipv4", 00:28:08.842 "trsvcid": "$NVMF_PORT", 00:28:08.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.842 "hdgst": ${hdgst:-false}, 00:28:08.842 "ddgst": ${ddgst:-false} 00:28:08.842 }, 00:28:08.842 "method": "bdev_nvme_attach_controller" 00:28:08.842 } 00:28:08.842 EOF 00:28:08.842 )") 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.842 { 00:28:08.842 "params": { 00:28:08.842 "name": "Nvme$subsystem", 00:28:08.842 "trtype": "$TEST_TRANSPORT", 00:28:08.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.842 "adrfam": "ipv4", 00:28:08.842 "trsvcid": "$NVMF_PORT", 
00:28:08.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.842 "hdgst": ${hdgst:-false}, 00:28:08.842 "ddgst": ${ddgst:-false} 00:28:08.842 }, 00:28:08.842 "method": "bdev_nvme_attach_controller" 00:28:08.842 } 00:28:08.842 EOF 00:28:08.842 )") 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.842 { 00:28:08.842 "params": { 00:28:08.842 "name": "Nvme$subsystem", 00:28:08.842 "trtype": "$TEST_TRANSPORT", 00:28:08.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.842 "adrfam": "ipv4", 00:28:08.842 "trsvcid": "$NVMF_PORT", 00:28:08.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.842 "hdgst": ${hdgst:-false}, 00:28:08.842 "ddgst": ${ddgst:-false} 00:28:08.842 }, 00:28:08.842 "method": "bdev_nvme_attach_controller" 00:28:08.842 } 00:28:08.842 EOF 00:28:08.842 )") 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.842 { 00:28:08.842 "params": { 00:28:08.842 "name": "Nvme$subsystem", 00:28:08.842 "trtype": "$TEST_TRANSPORT", 00:28:08.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.842 "adrfam": "ipv4", 00:28:08.842 "trsvcid": "$NVMF_PORT", 00:28:08.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.842 "hdgst": ${hdgst:-false}, 00:28:08.842 "ddgst": ${ddgst:-false} 00:28:08.842 }, 00:28:08.842 "method": "bdev_nvme_attach_controller" 00:28:08.842 } 00:28:08.842 EOF 00:28:08.842 )") 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.842 { 00:28:08.842 "params": { 00:28:08.842 "name": "Nvme$subsystem", 00:28:08.842 "trtype": "$TEST_TRANSPORT", 00:28:08.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.842 "adrfam": "ipv4", 00:28:08.842 "trsvcid": "$NVMF_PORT", 00:28:08.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.842 "hdgst": ${hdgst:-false}, 00:28:08.842 "ddgst": ${ddgst:-false} 00:28:08.842 }, 00:28:08.842 "method": "bdev_nvme_attach_controller" 00:28:08.842 } 00:28:08.842 EOF 00:28:08.842 )") 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.842 { 00:28:08.842 "params": { 00:28:08.842 "name": "Nvme$subsystem", 00:28:08.842 "trtype": "$TEST_TRANSPORT", 00:28:08.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.842 "adrfam": "ipv4", 00:28:08.842 "trsvcid": "$NVMF_PORT", 00:28:08.842 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.842 "hdgst": ${hdgst:-false}, 00:28:08.842 "ddgst": ${ddgst:-false} 00:28:08.842 }, 00:28:08.842 "method": "bdev_nvme_attach_controller" 00:28:08.842 } 00:28:08.842 EOF 00:28:08.842 )") 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.842 { 00:28:08.842 "params": { 00:28:08.842 "name": "Nvme$subsystem", 00:28:08.842 "trtype": "$TEST_TRANSPORT", 00:28:08.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.842 "adrfam": "ipv4", 00:28:08.842 "trsvcid": "$NVMF_PORT", 00:28:08.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.842 "hdgst": ${hdgst:-false}, 00:28:08.842 "ddgst": ${ddgst:-false} 00:28:08.842 }, 00:28:08.842 "method": "bdev_nvme_attach_controller" 00:28:08.842 } 00:28:08.842 EOF 00:28:08.842 )") 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.842 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.842 { 00:28:08.842 "params": { 00:28:08.842 "name": "Nvme$subsystem", 00:28:08.842 "trtype": "$TEST_TRANSPORT", 00:28:08.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.843 "adrfam": "ipv4", 00:28:08.843 "trsvcid": "$NVMF_PORT", 00:28:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.843 "hdgst": ${hdgst:-false}, 00:28:08.843 "ddgst": ${ddgst:-false} 00:28:08.843 }, 00:28:08.843 "method": "bdev_nvme_attach_controller" 00:28:08.843 } 00:28:08.843 EOF 00:28:08.843 )") 00:28:08.843 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:08.843 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:28:08.843 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:08.843 03:31:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:08.843 "params": { 00:28:08.843 "name": "Nvme1", 00:28:08.843 "trtype": "tcp", 00:28:08.843 "traddr": "10.0.0.2", 00:28:08.843 "adrfam": "ipv4", 00:28:08.843 "trsvcid": "4420", 00:28:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:08.843 "hdgst": false, 00:28:08.843 "ddgst": false 00:28:08.843 }, 00:28:08.843 "method": "bdev_nvme_attach_controller" 00:28:08.843 },{ 00:28:08.843 "params": { 00:28:08.843 "name": "Nvme2", 00:28:08.843 "trtype": "tcp", 00:28:08.843 "traddr": "10.0.0.2", 00:28:08.843 "adrfam": "ipv4", 00:28:08.843 "trsvcid": "4420", 00:28:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:08.843 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:08.843 "hdgst": false, 00:28:08.843 "ddgst": false 00:28:08.843 }, 00:28:08.843 "method": "bdev_nvme_attach_controller" 00:28:08.843 },{ 00:28:08.843 "params": { 00:28:08.843 "name": "Nvme3", 00:28:08.843 "trtype": "tcp", 00:28:08.843 "traddr": "10.0.0.2", 00:28:08.843 "adrfam": "ipv4", 00:28:08.843 "trsvcid": "4420", 00:28:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:08.843 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:08.843 "hdgst": false, 00:28:08.843 "ddgst": false 00:28:08.843 }, 00:28:08.843 "method": "bdev_nvme_attach_controller" 00:28:08.843 },{ 00:28:08.843 "params": { 00:28:08.843 "name": "Nvme4", 00:28:08.843 "trtype": "tcp", 00:28:08.843 "traddr": "10.0.0.2", 00:28:08.843 "adrfam": "ipv4", 00:28:08.843 "trsvcid": "4420", 00:28:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:08.843 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:08.843 "hdgst": false, 00:28:08.843 "ddgst": false 00:28:08.843 }, 00:28:08.843 "method": "bdev_nvme_attach_controller" 00:28:08.843 },{ 00:28:08.843 "params": { 00:28:08.843 "name": "Nvme5", 00:28:08.843 "trtype": "tcp", 00:28:08.843 "traddr": "10.0.0.2", 00:28:08.843 "adrfam": "ipv4", 00:28:08.843 "trsvcid": "4420", 00:28:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:08.843 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:08.843 "hdgst": false, 00:28:08.843 "ddgst": false 00:28:08.843 }, 00:28:08.843 "method": "bdev_nvme_attach_controller" 00:28:08.843 },{ 00:28:08.843 "params": { 00:28:08.843 "name": "Nvme6", 00:28:08.843 "trtype": "tcp", 00:28:08.843 "traddr": "10.0.0.2", 00:28:08.843 "adrfam": "ipv4", 00:28:08.843 "trsvcid": "4420", 00:28:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:08.843 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:08.843 "hdgst": false, 00:28:08.843 "ddgst": false 00:28:08.843 }, 00:28:08.843 "method": "bdev_nvme_attach_controller" 00:28:08.843 },{ 00:28:08.843 "params": { 00:28:08.843 "name": "Nvme7", 00:28:08.843 "trtype": "tcp", 00:28:08.843 "traddr": "10.0.0.2", 00:28:08.843 "adrfam": "ipv4", 00:28:08.843 "trsvcid": "4420", 00:28:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:08.843 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:08.843 "hdgst": false, 00:28:08.843 "ddgst": false 00:28:08.843 }, 00:28:08.843 "method": "bdev_nvme_attach_controller" 00:28:08.843 },{ 00:28:08.843 "params": { 00:28:08.843 "name": "Nvme8", 00:28:08.843 "trtype": "tcp", 00:28:08.843 "traddr": "10.0.0.2", 00:28:08.843 "adrfam": "ipv4", 00:28:08.843 "trsvcid": "4420", 00:28:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:08.843 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:08.843 "hdgst": false, 
00:28:08.843 "ddgst": false 00:28:08.843 }, 00:28:08.843 "method": "bdev_nvme_attach_controller" 00:28:08.843 },{ 00:28:08.843 "params": { 00:28:08.843 "name": "Nvme9", 00:28:08.843 "trtype": "tcp", 00:28:08.843 "traddr": "10.0.0.2", 00:28:08.843 "adrfam": "ipv4", 00:28:08.843 "trsvcid": "4420", 00:28:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:08.843 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:08.843 "hdgst": false, 00:28:08.843 "ddgst": false 00:28:08.843 }, 00:28:08.843 "method": "bdev_nvme_attach_controller" 00:28:08.843 },{ 00:28:08.843 "params": { 00:28:08.843 "name": "Nvme10", 00:28:08.843 "trtype": "tcp", 00:28:08.843 "traddr": "10.0.0.2", 00:28:08.843 "adrfam": "ipv4", 00:28:08.843 "trsvcid": "4420", 00:28:08.843 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:08.843 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:08.843 "hdgst": false, 00:28:08.843 "ddgst": false 00:28:08.843 }, 00:28:08.843 "method": "bdev_nvme_attach_controller" 00:28:08.843 }' 00:28:08.843 [2024-07-15 03:31:14.916435] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:08.843 [2024-07-15 03:31:14.916525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278902 ] 00:28:08.843 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.843 [2024-07-15 03:31:14.981323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.101 [2024-07-15 03:31:15.068803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.000 Running I/O for 10 seconds... 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.000 03:31:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.000 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:11.000 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:11.000 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:11.259 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:11.259 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:11.259 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:11.259 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:11.259 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.259 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.259 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.259 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:11.259 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:11.259 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3278729 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3278729 ']' 
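A note on the I/O gate above: waitforio (shutdown.sh@50-69) polls bdevperf's iostat up to ten times, a quarter second apart, and only lets the test proceed to killing the target once Nvme1n1 has accumulated at least 100 read ops; the trace shows the counter passing 3, then 67, then 131. The same loop as plain shell, using the helpers named in the log:

i=10; ret=1
while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
                jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
        fi
        sleep 0.25
        (( i-- ))
done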
00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3278729 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3278729 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3278729' 00:28:11.530 killing process with pid 3278729 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3278729 00:28:11.530 03:31:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3278729 00:28:11.531 [2024-07-15 03:31:17.603763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.603905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.603923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.603937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.603949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.603963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.603975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.603988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604396] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the 
state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.604710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c860 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.605681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.531 [2024-07-15 03:31:17.605723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.531 [2024-07-15 03:31:17.605740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.531 [2024-07-15 03:31:17.605754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.531 [2024-07-15 03:31:17.605768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.531 [2024-07-15 03:31:17.605781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.531 [2024-07-15 03:31:17.605794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.531 [2024-07-15 03:31:17.605807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.531 [2024-07-15 03:31:17.605819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2649290 is same with the state(5) to be set 00:28:11.531 [2024-07-15 03:31:17.605906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.531 [2024-07-15 03:31:17.605928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.531 [2024-07-15 03:31:17.605956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.531 [2024-07-15 03:31:17.605972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.531 [2024-07-15 03:31:17.605988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.531 [2024-07-15 03:31:17.606002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.532 [2024-07-15 03:31:17.606017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.532 [2024-07-15 03:31:17.606031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.532 [2024-07-15 03:31:17.606047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.532 [2024-07-15 03:31:17.606061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.532 [2024-07-15 03:31:17.606077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.532 [2024-07-15 03:31:17.606091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.532 [2024-07-15 03:31:17.606107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.532 [2024-07-15 03:31:17.606123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.532 [2024-07-15 03:31:17.606139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.532 [2024-07-15 03:31:17.606153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.532 [2024-07-15 03:31:17.606169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.532 [2024-07-15 03:31:17.606195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.532 [2024-07-15 03:31:17.606212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.532 [2024-07-15 03:31:17.606227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.532 [2024-07-15 03:31:17.606242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.532 [2024-07-15 03:31:17.606256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.532 [2024-07-15 03:31:17.606272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.532 [2024-07-15 03:31:17.606286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.532 [2024-07-15 03:31:17.606302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.532 [2024-07-15 03:31:17.606317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.532 [2024-07-15 03:31:17.606332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.532 [2024-07-15 03:31:17.606768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.532 [2024-07-15 03:31:17.606770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.532 [2024-07-15 03:31:17.606780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.606793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.606805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.606817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.606845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.606888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.606889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.606922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.606936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.606948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.606961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.606989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.606991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.607014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.607039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.607079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.607118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.607144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.607170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.607235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.607259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf38c30 is same with the state(5) to be set
00:28:11.533 [2024-07-15 03:31:17.607292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.607306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.607338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.533 [2024-07-15 03:31:17.607367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.533 [2024-07-15 03:31:17.607382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1
lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.533 [2024-07-15 03:31:17.607395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.533 [2024-07-15 03:31:17.607410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.533 [2024-07-15 03:31:17.607423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.533 [2024-07-15 03:31:17.607438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.533 [2024-07-15 03:31:17.607451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.533 [2024-07-15 03:31:17.607465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.533 [2024-07-15 03:31:17.607479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.607970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.534 [2024-07-15 03:31:17.607983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.534 [2024-07-15 03:31:17.608003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2709a70 is same with the 
state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608076] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2709a70 was disconnected and freed. reset controller. 00:28:11.534 [2024-07-15 03:31:17.608342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.608995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.609008] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.609020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.534 [2024-07-15 03:31:17.609033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.609288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf390d0 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.610459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:11.535 [2024-07-15 
03:31:17.610501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2649290 (9): Bad file descriptor 00:28:11.535 [2024-07-15 03:31:17.611433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be 
set 00:28:11.535 [2024-07-15 03:31:17.611724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611837] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.535 [2024-07-15 03:31:17.611850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 00:28:11.535 [2024-07-15 03:31:17.611995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set 
00:28:11.536 [2024-07-15 03:31:17.612008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.612020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.612020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.536 [2024-07-15 03:31:17.612033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.612045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.612049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2649290 with addr=10.0.0.2, port=4420
00:28:11.536 [2024-07-15 03:31:17.612058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.612070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2649290 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.612071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.612085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.612098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.612111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39590 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.612151] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:11.536 [2024-07-15 03:31:17.612573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2649290 (9): Bad file descriptor
00:28:11.536 [2024-07-15 03:31:17.612665] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:11.536 [2024-07-15 03:31:17.613030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:11.536 [2024-07-15 03:31:17.613053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:11.536 [2024-07-15 03:31:17.613070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
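For context on the errno in the posix_sock_create line above: errno 111 on Linux is ECONNREFUSED, i.e. nothing is accepting connections at the target address while the controller reset is in flight. A minimal standalone sketch using plain POSIX sockets (not SPDK's posix.c; the address and NVMe/TCP port 4420 are simply taken from the log lines above) that would report the same failure:

    /* Sketch only: reproduce "connect() failed, errno = 111" when no
     * listener is present at 10.0.0.2:4420 (the IANA NVMe/TCP port). */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With the target down this prints errno 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

The reconnect poller keeps retrying this connect until the target's listener comes back, which is why the failure is followed by "controller reinitialization failed" rather than a crash.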
00:28:11.536 [2024-07-15 03:31:17.613371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is 
same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613868] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set 00:28:11.536 [2024-07-15 03:31:17.613966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.613978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:11.536 [2024-07-15 03:31:17.613983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.536 [2024-07-15 03:31:17.614219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39a30 is same with the state(5) to be set
00:28:11.537 [2024-07-15 03:31:17.615204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set
00:28:11.537 [2024-07-15 03:31:17.615232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615587] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 
00:28:11.537 [2024-07-15 03:31:17.615962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.615999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.537 [2024-07-15 03:31:17.616024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.537 [2024-07-15 03:31:17.616049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.537 [2024-07-15 03:31:17.616062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.537 [2024-07-15 03:31:17.616075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.537 [2024-07-15 03:31:17.616088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.537 [2024-07-15 03:31:17.616105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.537 [2024-07-15 03:31:17.616119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.537 [2024-07-15 03:31:17.616132] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266d700 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf39ef0 is same with the state(5) to be set 00:28:11.537 [2024-07-15 03:31:17.616195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.537 [2024-07-15 03:31:17.616216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.537 [2024-07-15 03:31:17.616239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.537 [2024-07-15 03:31:17.616252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.537 [2024-07-15 03:31:17.616266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.537 [2024-07-15 03:31:17.616279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.537 [2024-07-15 03:31:17.616293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2669830 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.616373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 
[2024-07-15 03:31:17.616469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27cc8b0 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.616560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2685b50 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.616755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.538 [2024-07-15 03:31:17.616856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.538 [2024-07-15 03:31:17.616869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2814e10 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.616974] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.538 [2024-07-15 03:31:17.617626] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.538 [2024-07-15 03:31:17.617668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.617994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618082] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.538 [2024-07-15 03:31:17.618087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618131] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 
00:28:11.538 [2024-07-15 03:31:17.618635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.538 [2024-07-15 03:31:17.618746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.618769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.618790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.618813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.618835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.618856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.618902] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.618926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.618948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.618972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.618993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.619013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.619041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.619062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.619083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.619106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.619126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3a390 is 
same with the state(5) to be set 00:28:11.539 [2024-07-15 03:31:17.619301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:11.539 [2024-07-15 03:31:17.619629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.619942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 
[2024-07-15 03:31:17.619972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.619985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.620000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.620014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.620029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.620047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.620063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.620077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.620092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.620106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.620122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.620135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.620151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.620164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.620180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.620194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.620210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.620223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.539 [2024-07-15 03:31:17.620239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.539 [2024-07-15 03:31:17.620254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.540 [2024-07-15 
03:31:17.620249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.540 [2024-07-15 03:31:17.620725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.540 [2024-07-15 03:31:17.620730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.540 [2024-07-15 03:31:17.620742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.541 [2024-07-15 03:31:17.620755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.541 [2024-07-15 03:31:17.620767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.541 [2024-07-15 03:31:17.620779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.541 [2024-07-15 03:31:17.620791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.541 [2024-07-15 03:31:17.620819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.541 [2024-07-15 03:31:17.620831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.541 [2024-07-15 03:31:17.620844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.541 [2024-07-15 03:31:17.620856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.541 [2024-07-15 03:31:17.620910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.541 [2024-07-15 03:31:17.620924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.541 [2024-07-15 03:31:17.620936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.541 [2024-07-15 03:31:17.620952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.541 [2024-07-15 03:31:17.620965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.541 [2024-07-15 03:31:17.620978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.620993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.541 [2024-07-15 03:31:17.620993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.621009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.621012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.541 [2024-07-15 03:31:17.621022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.621026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.541 [2024-07-15 03:31:17.621035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.621042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.541 [2024-07-15 03:31:17.621047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set
00:28:11.541 [2024-07-15 03:31:17.621056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.541 [2024-07-15 03:31:17.621061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111c3c0 is same with the state(5) to be set 00:28:11.541 [2024-07-15 03:31:17.621073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.541 [2024-07-15 03:31:17.621087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.541 [2024-07-15 03:31:17.621103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.541 [2024-07-15 03:31:17.621116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.541 [2024-07-15 03:31:17.621132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.541 [2024-07-15 03:31:17.621146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.541 [2024-07-15 03:31:17.621161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.541 [2024-07-15 03:31:17.621186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.541 [2024-07-15 03:31:17.621217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.541 [2024-07-15 03:31:17.621235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.541 [2024-07-15 03:31:17.621251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.541 [2024-07-15 03:31:17.621264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.541 [2024-07-15 03:31:17.621279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.541 [2024-07-15 03:31:17.621292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.541 [2024-07-15 03:31:17.621307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.541 [2024-07-15 03:31:17.621320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.541 [2024-07-15 03:31:17.621336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.541 [2024-07-15 03:31:17.621349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.541 [2024-07-15 03:31:17.621365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:11.541 [2024-07-15 03:31:17.621379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.541 [2024-07-15 03:31:17.621393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b54a0 is same with the state(5) to be set 00:28:11.541 [2024-07-15 03:31:17.621462] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27b54a0 was disconnected and freed. reset controller. 00:28:11.541 [2024-07-15 03:31:17.622809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:11.541 [2024-07-15 03:31:17.622888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266cfd0 (9): Bad file descriptor 00:28:11.541 [2024-07-15 03:31:17.623017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:11.541 [2024-07-15 03:31:17.623578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.541 [2024-07-15 03:31:17.623608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266cfd0 with addr=10.0.0.2, port=4420 00:28:11.541 [2024-07-15 03:31:17.623625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266cfd0 is same with the state(5) to be set 00:28:11.541 [2024-07-15 03:31:17.623768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.541 [2024-07-15 03:31:17.623793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2649290 with addr=10.0.0.2, port=4420 00:28:11.541 [2024-07-15 03:31:17.623809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2649290 is same with the state(5) to be set 00:28:11.541 [2024-07-15 03:31:17.623953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266cfd0 (9): Bad file descriptor 00:28:11.541 [2024-07-15 03:31:17.623981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2649290 (9): Bad file descriptor 00:28:11.541 [2024-07-15 03:31:17.624065] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:11.541 [2024-07-15 03:31:17.624100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:11.541 [2024-07-15 03:31:17.624117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:11.541 [2024-07-15 03:31:17.624132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:11.542 [2024-07-15 03:31:17.624156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:11.542 [2024-07-15 03:31:17.624179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:11.542 [2024-07-15 03:31:17.624192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:11.542 [2024-07-15 03:31:17.624259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.542 [2024-07-15 03:31:17.624280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:11.542 [2024-07-15 03:31:17.626034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266d700 (9): Bad file descriptor
00:28:11.542 [2024-07-15 03:31:17.626068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2669830 (9): Bad file descriptor
00:28:11.542 [2024-07-15 03:31:17.626099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27cc8b0 (9): Bad file descriptor
00:28:11.542 [2024-07-15 03:31:17.626150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:11.542 [2024-07-15 03:31:17.626181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching ASYNC EVENT REQUEST/ABORTED - SQ DELETION pairs for cid:1-3 ...]
00:28:11.542 [2024-07-15 03:31:17.626279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a2370 is same with the state(5) to be set
[... the same four ASYNC EVENT REQUEST aborts (cid:0-3) repeat ...]
00:28:11.542 [2024-07-15 03:31:17.626440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269f910 is same with the state(5) to be set
00:28:11.542 [2024-07-15 03:31:17.626468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2685b50 (9): Bad file descriptor
00:28:11.542 [2024-07-15 03:31:17.626503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2814e10 (9): Bad file descriptor
[... the same four ASYNC EVENT REQUEST aborts (cid:0-3) repeat ...]
00:28:11.542 [2024-07-15 03:31:17.626666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141610 is same with the state(5) to be set
00:28:11.542 [2024-07-15 03:31:17.633224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:11.542 [2024-07-15 03:31:17.633311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:11.542 [2024-07-15 03:31:17.633596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.542 [2024-07-15 03:31:17.633633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2649290 with addr=10.0.0.2, port=4420
00:28:11.542 [2024-07-15 03:31:17.633654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2649290 is same with the state(5) to be set
00:28:11.542 [2024-07-15 03:31:17.633778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.542 [2024-07-15 03:31:17.633805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266cfd0 with addr=10.0.0.2, port=4420
00:28:11.542 [2024-07-15 03:31:17.633822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266cfd0 is same with the state(5) to be set
00:28:11.542 [2024-07-15 03:31:17.633894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2649290 (9): Bad file descriptor
00:28:11.542 [2024-07-15 03:31:17.633921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266cfd0 (9): Bad file descriptor
00:28:11.542 [2024-07-15 03:31:17.633975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:11.542 [2024-07-15 03:31:17.633994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:11.542 [2024-07-15 03:31:17.634011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:11.542 [2024-07-15 03:31:17.634032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:28:11.542 [2024-07-15 03:31:17.634047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:28:11.542 [2024-07-15 03:31:17.634061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:28:11.542 [2024-07-15 03:31:17.634116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:11.542 [2024-07-15 03:31:17.634136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:11.542 [2024-07-15 03:31:17.636099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a2370 (9): Bad file descriptor
00:28:11.542 [2024-07-15 03:31:17.636150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269f910 (9): Bad file descriptor
00:28:11.542 [2024-07-15 03:31:17.636195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141610 (9): Bad file descriptor
00:28:11.542 [2024-07-15 03:31:17.636355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.542 [2024-07-15 03:31:17.636382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching pairs for READ cid:5-6 (lba:25216-25344), WRITE cid:0-3 (lba:32768-33152) and READ cid:7-63 (lba:25472-32640), each ABORTED - SQ DELETION (00/08) ...]
00:28:11.544 [2024-07-15 03:31:17.638350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b1d40 is same with the state(5) to be set
00:28:11.544 [2024-07-15 03:31:17.639614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.544 [2024-07-15 03:31:17.639638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching pairs for READ cid:1-63 (lba:16512-24448), each ABORTED - SQ DELETION (00/08) ...]
00:28:11.546 [2024-07-15 03:31:17.641613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b3160 is same with the state(5) to be set
00:28:11.546 [2024-07-15 03:31:17.642855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.546 [2024-07-15 03:31:17.642894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching pairs for READ cid:5-40 (lba:25216-29696), each ABORTED - SQ DELETION (00/08) ...]
00:28:11.547 [2024-07-15 03:31:17.644007] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.644678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.644692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2643a00 is same with the state(5) to be set 00:28:11.547 [2024-07-15 03:31:17.645904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.645927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.547 [2024-07-15 03:31:17.645948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.547 [2024-07-15 03:31:17.645963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.645978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.645993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646110] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.646972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.646987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.647003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.647017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.647032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.647046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.647062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.548 [2024-07-15 03:31:17.647076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.548 [2024-07-15 03:31:17.647092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:11.549 [2024-07-15 03:31:17.647328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 
03:31:17.647625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.647819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.647833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2644eb0 is same with the state(5) to be set 00:28:11.549 [2024-07-15 03:31:17.649104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.649127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.649149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.649164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.549 [2024-07-15 03:31:17.649181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.549 [2024-07-15 03:31:17.649195] 
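The "(00/08)" printed with every aborted completion above is the NVMe (status code type/status code) pair: type 0x0 is Generic Command Status and, within it, code 0x08 is "Command Aborted due to SQ Deletion", which spdk_nvme_print_completion renders as "ABORTED - SQ DELETION". With dnr:0 (Do Not Retry clear), the initiator is permitted to resubmit these reads once the qpair is reconnected. A minimal standalone C sketch of that decoding (an illustration of the spec values, not SPDK code):

    /* decode_status.c - decode the "(SCT/SC)" pair shown in the log above.
     * Values are taken from the NVMe base specification. */
    #include <stdio.h>

    int main(void)
    {
        unsigned sct = 0x00; /* Status Code Type 0x0 = Generic Command Status */
        unsigned sc  = 0x08; /* Generic status 0x08 = Command Aborted due to SQ Deletion */

        printf("(%02x/%02x) -> %s\n", sct, sc,
               (sct == 0x00 && sc == 0x08) ? "ABORTED - SQ DELETION"
                                           : "some other status");
        return 0;
    }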
00:28:11.549 [2024-07-15 03:31:17.649104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.549 [2024-07-15 03:31:17.649127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 further identical READ / ABORTED - SQ DELETION pairs (cid:1-62, lba:16512-24320, len:128) omitted ...]
00:28:11.551 [2024-07-15 03:31:17.651057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.551 [2024-07-15 03:31:17.651070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:11.551 [2024-07-15 03:31:17.651084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b8e20 is same with the state(5) to be set
00:28:11.551 [2024-07-15 03:31:17.652695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:11.551 [2024-07-15 03:31:17.652732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:28:11.551 [2024-07-15 03:31:17.652752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:11.551 [2024-07-15 03:31:17.652769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:28:11.551 [2024-07-15 03:31:17.652885] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:11.551 [2024-07-15 03:31:17.653027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:11.551 [2024-07-15 03:31:17.653318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.551 [2024-07-15 03:31:17.653350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2814e10 with addr=10.0.0.2, port=4420
00:28:11.551 [2024-07-15 03:31:17.653368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2814e10 is same with the state(5) to be set
00:28:11.551 [2024-07-15 03:31:17.653498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.551 [2024-07-15 03:31:17.653525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266d700 with addr=10.0.0.2, port=4420
00:28:11.551 [2024-07-15 03:31:17.653541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266d700 is same with the state(5) to be set
00:28:11.551 [2024-07-15 03:31:17.653643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.551 [2024-07-15 03:31:17.653670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2685b50 with addr=10.0.0.2, port=4420
00:28:11.551 [2024-07-15 03:31:17.653686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2685b50 is same with the state(5) to be set
00:28:11.551 [2024-07-15 03:31:17.653801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.551 [2024-07-15 03:31:17.653828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2669830 with addr=10.0.0.2, port=4420
00:28:11.551 [2024-07-15 03:31:17.653844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2669830 is same with the state(5) to be set
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.551 [2024-07-15 03:31:17.654968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.551 [2024-07-15 03:31:17.654984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.551 [2024-07-15 03:31:17.655001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.551 [2024-07-15 03:31:17.655015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.551 [2024-07-15 03:31:17.655031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.551 [2024-07-15 03:31:17.655045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.551 [2024-07-15 03:31:17.655060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.551 [2024-07-15 03:31:17.655074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.551 [2024-07-15 03:31:17.655090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.551 [2024-07-15 03:31:17.655104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.551 [2024-07-15 03:31:17.655120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.551 [2024-07-15 03:31:17.655134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.551 [2024-07-15 03:31:17.655150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.551 [2024-07-15 03:31:17.655164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.551 [2024-07-15 03:31:17.655179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.551 [2024-07-15 03:31:17.655194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.551 [2024-07-15 03:31:17.655209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655253] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.655978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.655992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.656021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.656056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.656087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.656116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.656147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.656178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.656208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.656237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.656268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.656297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.552 [2024-07-15 03:31:17.656326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.552 [2024-07-15 03:31:17.656342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:11.553 [2024-07-15 03:31:17.656495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 
03:31:17.656791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.656868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.656891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b3ff0 is same with the state(5) to be set 00:28:11.553 [2024-07-15 03:31:17.658147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.553 [2024-07-15 03:31:17.658723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.553 [2024-07-15 03:31:17.658739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.658753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.658769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.658782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.658798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.658813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.658832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.658847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.658863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.658884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.658901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.658916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.658932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.658946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.658962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.658976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.658991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.554 [2024-07-15 03:31:17.659655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.554 [2024-07-15 03:31:17.659668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.555 [2024-07-15 03:31:17.659684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-07-15 03:31:17.659698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.555 [2024-07-15 03:31:17.659713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-07-15 03:31:17.659727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.555 [2024-07-15 03:31:17.659743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-07-15 03:31:17.659757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.555 [2024-07-15 03:31:17.659773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-07-15 03:31:17.659786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.555 [2024-07-15 03:31:17.659802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-07-15 03:31:17.659816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.555 [2024-07-15 03:31:17.659831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-07-15 03:31:17.659845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.555 [2024-07-15 03:31:17.659860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-07-15 03:31:17.659875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.555 [2024-07-15 03:31:17.659899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-07-15 03:31:17.659913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.555 [2024-07-15 03:31:17.659929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.555 [2024-07-15 03:31:17.659943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.814 [2024-07-15 03:31:17.659958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b7a60 is same with the state(5) to be set 00:28:11.814 [2024-07-15 03:31:17.662495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:11.814 [2024-07-15 03:31:17.662544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:11.814 [2024-07-15 03:31:17.662561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:11.814 [2024-07-15 03:31:17.662580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:11.814 task offset: 24576 on job bdev=Nvme1n1 fails 00:28:11.814 00:28:11.814 Latency(us) 00:28:11.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.814 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:11.814 Job: Nvme1n1 ended in about 0.92 seconds with error 00:28:11.814 Verification LBA range: start 0x0 length 0x400 00:28:11.814 Nvme1n1 : 0.92 208.24 13.02 69.41 0.00 227937.00 4077.80 257872.02 00:28:11.814 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:11.814 Job: Nvme2n1 ended in about 0.95 seconds with error 00:28:11.814 Verification LBA range: start 0x0 length 0x400 00:28:11.814 Nvme2n1 : 0.95 205.94 12.87 67.24 0.00 227230.49 18932.62 250104.79 00:28:11.814 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:11.814 Job: Nvme3n1 ended in about 0.95 seconds with error 00:28:11.814 Verification LBA range: start 0x0 length 0x400 00:28:11.814 Nvme3n1 : 0.95 134.03 8.38 67.02 0.00 302810.45 21359.88 253211.69 00:28:11.814 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:11.814 Job: Nvme4n1 ended in about 0.96 seconds with error 00:28:11.814 Verification LBA range: start 0x0 length 0x400 00:28:11.814 Nvme4n1 : 0.96 204.58 12.79 62.63 0.00 222695.16 16505.36 250104.79 00:28:11.814 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:11.814 Job: Nvme5n1 ended in about 0.96 seconds with error 00:28:11.814 Verification LBA range: start 0x0 length 0x400 00:28:11.814 Nvme5n1 : 0.96 199.75 12.48 66.58 0.00 219454.39 21554.06 254765.13 00:28:11.814 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:11.814 Job: Nvme6n1 ended in about 0.97 seconds with error 00:28:11.814 Verification LBA range: start 0x0 length 0x400 00:28:11.814 Nvme6n1 : 0.97 131.93 8.25 65.96 0.00 289667.10 20291.89 256318.58 00:28:11.814 Job: Nvme7n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:28:11.814 Job: Nvme7n1 ended in about 0.94 seconds with error 00:28:11.814 Verification LBA range: start 0x0 length 0x400 00:28:11.814 Nvme7n1 : 0.94 209.62 13.10 68.45 0.00 200643.97 19418.07 240784.12 00:28:11.814 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:11.814 Verification LBA range: start 0x0 length 0x400 00:28:11.814 Nvme8n1 : 0.93 207.04 12.94 0.00 0.00 263037.28 20194.80 253211.69 00:28:11.814 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:11.814 Job: Nvme9n1 ended in about 0.97 seconds with error 00:28:11.814 Verification LBA range: start 0x0 length 0x400 00:28:11.814 Nvme9n1 : 0.97 136.65 8.54 60.62 0.00 271460.63 16796.63 257872.02 00:28:11.814 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:11.814 Job: Nvme10n1 ended in about 0.96 seconds with error 00:28:11.814 Verification LBA range: start 0x0 length 0x400 00:28:11.814 Nvme10n1 : 0.96 132.72 8.30 66.36 0.00 263858.76 20097.71 284280.60 00:28:11.814 =================================================================================================================== 00:28:11.814 Total : 1770.51 110.66 594.28 0.00 244586.16 4077.80 284280.60 00:28:11.814 [2024-07-15 03:31:17.689496] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:11.814 [2024-07-15 03:31:17.689584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:11.814 [2024-07-15 03:31:17.689938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.814 [2024-07-15 03:31:17.689987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27cc8b0 with addr=10.0.0.2, port=4420 00:28:11.814 [2024-07-15 03:31:17.690009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27cc8b0 is same with the state(5) to be set 00:28:11.814 [2024-07-15 03:31:17.690047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2814e10 (9): Bad file descriptor 00:28:11.814 [2024-07-15 03:31:17.690073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266d700 (9): Bad file descriptor 00:28:11.814 [2024-07-15 03:31:17.690103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2685b50 (9): Bad file descriptor 00:28:11.814 [2024-07-15 03:31:17.690132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2669830 (9): Bad file descriptor 00:28:11.814 [2024-07-15 03:31:17.690448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.814 [2024-07-15 03:31:17.690479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266cfd0 with addr=10.0.0.2, port=4420 00:28:11.814 [2024-07-15 03:31:17.690496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266cfd0 is same with the state(5) to be set 00:28:11.814 [2024-07-15 03:31:17.690639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.814 [2024-07-15 03:31:17.690666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2649290 with addr=10.0.0.2, port=4420 00:28:11.814 [2024-07-15 03:31:17.690683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2649290 is same with the state(5) to be set 00:28:11.814 [2024-07-15 03:31:17.690793] 
posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.814 [2024-07-15 03:31:17.690820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x269f910 with addr=10.0.0.2, port=4420 00:28:11.814 [2024-07-15 03:31:17.690836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269f910 is same with the state(5) to be set 00:28:11.814 [2024-07-15 03:31:17.690943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.814 [2024-07-15 03:31:17.690970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2141610 with addr=10.0.0.2, port=4420 00:28:11.814 [2024-07-15 03:31:17.690986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141610 is same with the state(5) to be set 00:28:11.815 [2024-07-15 03:31:17.691085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.815 [2024-07-15 03:31:17.691111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26a2370 with addr=10.0.0.2, port=4420 00:28:11.815 [2024-07-15 03:31:17.691127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a2370 is same with the state(5) to be set 00:28:11.815 [2024-07-15 03:31:17.691145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27cc8b0 (9): Bad file descriptor 00:28:11.815 [2024-07-15 03:31:17.691163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:11.815 [2024-07-15 03:31:17.691177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:11.815 [2024-07-15 03:31:17.691193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:11.815 [2024-07-15 03:31:17.691214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:11.815 [2024-07-15 03:31:17.691229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:11.815 [2024-07-15 03:31:17.691242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:11.815 [2024-07-15 03:31:17.691265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:11.815 [2024-07-15 03:31:17.691281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:11.815 [2024-07-15 03:31:17.691294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:11.815 [2024-07-15 03:31:17.691310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:11.815 [2024-07-15 03:31:17.691324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:11.815 [2024-07-15 03:31:17.691336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:11.815 [2024-07-15 03:31:17.691369] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
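For anyone triaging the connect() failures above: errno = 111 is ECONNREFUSED, meaning the host's TCP connect() to 10.0.0.2:4420 reached the network stack but found no listener, which is the expected symptom here because the nvmf target was shut down while the bdev layer was still reconnecting. A minimal listener probe, as a hedged sketch (it assumes bash's /dev/tcp redirection and the timeout utility are available; it is not part of the test suite):

# hypothetical triage helper, not part of shutdown.sh
addr=10.0.0.2   # target address used throughout this run
port=4420       # NVMe-oF TCP port from the test configuration
if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
  echo "listener is up on ${addr}:${port}"
else
  echo "no listener on ${addr}:${port} (connection refused or timed out)"
fi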
00:28:11.815 [2024-07-15 03:31:17.691391] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.815 [2024-07-15 03:31:17.691411] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.815 [2024-07-15 03:31:17.691429] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.815 [2024-07-15 03:31:17.691447] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:11.815 [2024-07-15 03:31:17.692075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.815 [2024-07-15 03:31:17.692101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.815 [2024-07-15 03:31:17.692118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.815 [2024-07-15 03:31:17.692129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.815 [2024-07-15 03:31:17.692145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266cfd0 (9): Bad file descriptor 00:28:11.815 [2024-07-15 03:31:17.692164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2649290 (9): Bad file descriptor 00:28:11.815 [2024-07-15 03:31:17.692182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x269f910 (9): Bad file descriptor 00:28:11.815 [2024-07-15 03:31:17.692199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141610 (9): Bad file descriptor 00:28:11.815 [2024-07-15 03:31:17.692216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26a2370 (9): Bad file descriptor 00:28:11.815 [2024-07-15 03:31:17.692231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:11.815 [2024-07-15 03:31:17.692244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:11.815 [2024-07-15 03:31:17.692258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:11.815 [2024-07-15 03:31:17.692324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.815 [2024-07-15 03:31:17.692346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:11.815 [2024-07-15 03:31:17.692359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:11.815 [2024-07-15 03:31:17.692373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:11.815 [2024-07-15 03:31:17.692389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:11.815 [2024-07-15 03:31:17.692403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:11.815 [2024-07-15 03:31:17.692416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:11.815 [2024-07-15 03:31:17.692431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:11.815 [2024-07-15 03:31:17.692450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:11.815 [2024-07-15 03:31:17.692463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:11.815 [2024-07-15 03:31:17.692482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:11.815 [2024-07-15 03:31:17.692496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:11.815 [2024-07-15 03:31:17.692508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:11.815 [2024-07-15 03:31:17.692524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:11.815 [2024-07-15 03:31:17.692537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:11.815 [2024-07-15 03:31:17.692550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:11.815 [2024-07-15 03:31:17.692602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.815 [2024-07-15 03:31:17.692621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.815 [2024-07-15 03:31:17.692633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.815 [2024-07-15 03:31:17.692645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:11.815 [2024-07-15 03:31:17.692656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
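The errno = 111 failures above are ECONNREFUSED: by this point nvmf_shutdown_tc3 has taken the target down, so every reconnect attempt from the bdev_nvme layer is refused and each controller is marked failed; the repeated failover and reset notices are that one cascade draining across the cnode subsystems, not independent faults. A minimal probe, assuming the same 10.0.0.2:4420 listener address shown in the trace, would confirm the refusal from the initiator side:

  # Hypothetical check, not part of shutdown.sh: bash's /dev/tcp connect returns
  # non-zero when the kernel reports ECONNREFUSED (errno 111).
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 refused the connection, matching the posix.c errors above"
  fi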
00:28:12.075 03:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:12.075 03:31:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3278902 00:28:13.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3278902) - No such process 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:13.070 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:13.070 rmmod nvme_tcp 00:28:13.070 rmmod nvme_fabrics 00:28:13.070 rmmod nvme_keyring 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.328 03:31:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.232 03:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.232 00:28:15.232 real 0m7.524s 00:28:15.232 user 0m18.434s 00:28:15.232 sys 0m1.482s 00:28:15.232 
03:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:15.232 03:31:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.232 ************************************ 00:28:15.232 END TEST nvmf_shutdown_tc3 00:28:15.232 ************************************ 00:28:15.232 03:31:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:15.232 03:31:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:15.232 00:28:15.232 real 0m27.469s 00:28:15.232 user 1m16.813s 00:28:15.232 sys 0m6.413s 00:28:15.232 03:31:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:15.232 03:31:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:15.232 ************************************ 00:28:15.232 END TEST nvmf_shutdown 00:28:15.232 ************************************ 00:28:15.232 03:31:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:15.232 03:31:21 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:15.232 03:31:21 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.232 03:31:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.232 03:31:21 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:15.232 03:31:21 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:15.232 03:31:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.232 03:31:21 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:15.232 03:31:21 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:15.232 03:31:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:15.232 03:31:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:15.232 03:31:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.232 ************************************ 00:28:15.232 START TEST nvmf_multicontroller 00:28:15.232 ************************************ 00:28:15.232 03:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:15.490 * Looking for test storage... 
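run_test is the harness wrapper that times each suite and prints the START/END TEST banners, so the nvmf_multicontroller suite begins at the line above. A sketch of invoking it outside Jenkins, assuming a local SPDK checkout with the same layout (hugepages and the test NICs still have to be provisioned the way autotest does):

  # Paths mirror the workspace seen in the log; adjust for a local tree.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/host/multicontroller.sh --transport=tcp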
00:28:15.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.490 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:15.491 03:31:21 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.491 03:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.415 03:31:23 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.415 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.415 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:17.415 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.416 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.416 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.416 03:31:23 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:17.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:28:17.416 00:28:17.416 --- 10.0.0.2 ping statistics --- 00:28:17.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.416 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:17.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:28:17.416 00:28:17.416 --- 10.0.0.1 ping statistics --- 00:28:17.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.416 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3281309 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3281309 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3281309 ']' 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:17.416 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.674 [2024-07-15 03:31:23.602243] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:17.674 [2024-07-15 03:31:23.602334] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.674 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.674 [2024-07-15 03:31:23.675588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:17.674 [2024-07-15 03:31:23.767627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.674 [2024-07-15 03:31:23.767687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.674 [2024-07-15 03:31:23.767718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.674 [2024-07-15 03:31:23.767735] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.674 [2024-07-15 03:31:23.767749] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
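The app_setup_trace notices above spell out how to inspect this nvmf_tgt instance (shared-memory id 0, tracepoint group mask 0xFFFF): take a live snapshot with spdk_trace, or keep the shared-memory file for later. Both commands below are taken directly from the notice text:

  # Live snapshot while the target is still running:
  spdk_trace -s nvmf -i 0
  # Or preserve the trace buffer for offline analysis/debug:
  cp /dev/shm/nvmf_trace.0 /tmp/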
00:28:17.674 [2024-07-15 03:31:23.767870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.674 [2024-07-15 03:31:23.768002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.674 [2024-07-15 03:31:23.768010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.932 [2024-07-15 03:31:23.914682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.932 Malloc0 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.932 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.933 [2024-07-15 03:31:23.979578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.933 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.933 
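The rpc_cmd calls above, and the ones that follow for the port-4421 listeners and cnode2, are thin wrappers that drive scripts/rpc.py against the target's /var/tmp/spdk.sock. Issued by hand, the same target-side setup would look roughly like this; every subcommand and argument is copied from the trace, only the rpc.py framing is assumed:

  # Target-side configuration mirroring host/multicontroller.sh lines 27-33:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420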
03:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:17.933 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.933 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.933 [2024-07-15 03:31:23.987395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:17.933 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.933 03:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:17.933 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.933 03:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.933 Malloc1 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3281445 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3281445 /var/tmp/bdevperf.sock 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3281445 ']' 00:28:17.933 03:31:24 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:17.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:17.933 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.498 NVMe0n1 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.498 1 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.498 request: 00:28:18.498 { 00:28:18.498 "name": "NVMe0", 00:28:18.498 "trtype": "tcp", 00:28:18.498 "traddr": "10.0.0.2", 00:28:18.498 "adrfam": "ipv4", 00:28:18.498 "trsvcid": "4420", 00:28:18.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.498 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:18.498 "hostaddr": "10.0.0.2", 00:28:18.498 "hostsvcid": "60000", 00:28:18.498 "prchk_reftag": false, 00:28:18.498 "prchk_guard": false, 00:28:18.498 "hdgst": false, 00:28:18.498 "ddgst": false, 00:28:18.498 "method": "bdev_nvme_attach_controller", 00:28:18.498 "req_id": 1 00:28:18.498 } 00:28:18.498 Got JSON-RPC error response 00:28:18.498 response: 00:28:18.498 { 00:28:18.498 "code": -114, 00:28:18.498 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:18.498 } 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.498 request: 00:28:18.498 { 00:28:18.498 "name": "NVMe0", 00:28:18.498 "trtype": "tcp", 00:28:18.498 "traddr": "10.0.0.2", 00:28:18.498 "adrfam": "ipv4", 00:28:18.498 "trsvcid": "4420", 00:28:18.498 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:18.498 "hostaddr": "10.0.0.2", 00:28:18.498 "hostsvcid": "60000", 00:28:18.498 "prchk_reftag": false, 00:28:18.498 "prchk_guard": false, 
00:28:18.498 "hdgst": false, 00:28:18.498 "ddgst": false, 00:28:18.498 "method": "bdev_nvme_attach_controller", 00:28:18.498 "req_id": 1 00:28:18.498 } 00:28:18.498 Got JSON-RPC error response 00:28:18.498 response: 00:28:18.498 { 00:28:18.498 "code": -114, 00:28:18.498 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:18.498 } 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:18.498 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.499 request: 00:28:18.499 { 00:28:18.499 "name": "NVMe0", 00:28:18.499 "trtype": "tcp", 00:28:18.499 "traddr": "10.0.0.2", 00:28:18.499 "adrfam": "ipv4", 00:28:18.499 "trsvcid": "4420", 00:28:18.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.499 "hostaddr": "10.0.0.2", 00:28:18.499 "hostsvcid": "60000", 00:28:18.499 "prchk_reftag": false, 00:28:18.499 "prchk_guard": false, 00:28:18.499 "hdgst": false, 00:28:18.499 "ddgst": false, 00:28:18.499 "multipath": "disable", 00:28:18.499 "method": "bdev_nvme_attach_controller", 00:28:18.499 "req_id": 1 00:28:18.499 } 00:28:18.499 Got JSON-RPC error response 00:28:18.499 response: 00:28:18.499 { 00:28:18.499 "code": -114, 00:28:18.499 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:18.499 } 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:18.499 03:31:24 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.499 request: 00:28:18.499 { 00:28:18.499 "name": "NVMe0", 00:28:18.499 "trtype": "tcp", 00:28:18.499 "traddr": "10.0.0.2", 00:28:18.499 "adrfam": "ipv4", 00:28:18.499 "trsvcid": "4420", 00:28:18.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.499 "hostaddr": "10.0.0.2", 00:28:18.499 "hostsvcid": "60000", 00:28:18.499 "prchk_reftag": false, 00:28:18.499 "prchk_guard": false, 00:28:18.499 "hdgst": false, 00:28:18.499 "ddgst": false, 00:28:18.499 "multipath": "failover", 00:28:18.499 "method": "bdev_nvme_attach_controller", 00:28:18.499 "req_id": 1 00:28:18.499 } 00:28:18.499 Got JSON-RPC error response 00:28:18.499 response: 00:28:18.499 { 00:28:18.499 "code": -114, 00:28:18.499 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:18.499 } 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.499 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.756 00:28:18.756 03:31:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.756 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:18.756 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.756 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.756 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.756 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:18.756 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.756 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.013 00:28:19.013 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.013 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:19.013 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.013 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:19.013 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.013 03:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.013 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:19.013 03:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:20.388 0 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3281445 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3281445 ']' 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3281445 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3281445 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3281445' 00:28:20.388 killing process with pid 3281445 00:28:20.388 03:31:26 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3281445 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3281445 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:20.388 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:20.388 [2024-07-15 03:31:24.094336] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:20.388 [2024-07-15 03:31:24.094436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281445 ] 00:28:20.388 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.388 [2024-07-15 03:31:24.154758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.388 [2024-07-15 03:31:24.240962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.388 [2024-07-15 03:31:24.961436] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name c7c6fba4-00f5-40b8-b077-9adc36c00362 already exists 00:28:20.388 [2024-07-15 03:31:24.961475] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:c7c6fba4-00f5-40b8-b077-9adc36c00362 alias for bdev NVMe1n1 00:28:20.388 [2024-07-15 03:31:24.961489] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:20.388 Running I/O for 1 seconds... 
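The try.txt dump closes the loop on the three code -114 errors earlier in the trace: reattaching the name NVMe0 with a different hostnqn, a different subsystem, or "multipath": "disable" is rejected, attaching the same name over the second listener on port 4421 succeeds as an extra path, and the later NVMe1 attach hits a bdev-name collision on the shared namespace UUID (the alias registration fails while the attach itself still succeeds, which the test tolerates). The I/O run itself comes from bdevperf over its private RPC socket; condensed from the commands in the trace, with paths relative to the spdk tree:

  # Start bdevperf idle (-z) on its own RPC socket, then trigger the queued job:
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  # ... bdev_nvme_attach_controller is issued against the same socket, as in the
  # trace, so an NVMe0n1 bdev exists before the run is kicked off ...
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests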
00:28:20.388
00:28:20.388 Latency(us)
00:28:20.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.388 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:28:20.388 NVMe0n1 : 1.00 19226.63 75.10 0.00 0.00 6646.81 5728.33 14757.74
00:28:20.388 ===================================================================================================================
00:28:20.388 Total : 19226.63 75.10 0.00 0.00 6646.81 5728.33 14757.74
00:28:20.388 Received shutdown signal, test time was about 1.000000 seconds
00:28:20.388
00:28:20.388 Latency(us)
00:28:20.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.388 ===================================================================================================================
00:28:20.388 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:20.388 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:28:20.388 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
03:31:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:20.388 rmmod nvme_tcp
00:28:20.388 rmmod nvme_fabrics
00:28:20.388 rmmod nvme_keyring
03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3281309 ']'
03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3281309
03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3281309 ']'
03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3281309
03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3281309
03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3281309'
killing process with pid 3281309
03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3281309
03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3281309
00:28:20.648 03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:20.648 03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:20.648 03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:20.648 03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:20.648 03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:20.648 03:31:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:20.648 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:20.648 03:31:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:23.173 03:31:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:23.173
00:28:23.173 real 0m7.464s
00:28:23.173 user 0m12.009s
00:28:23.173 sys 0m2.226s
00:28:23.173 03:31:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:23.173 03:31:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:28:23.173 ************************************
00:28:23.173 END TEST nvmf_multicontroller
00:28:23.173 ************************************
00:28:23.173 03:31:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:28:23.173 03:31:28 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:28:23.173 03:31:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:28:23.173 03:31:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:23.173 03:31:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:23.173 ************************************
00:28:23.173 START TEST nvmf_aer
00:28:23.173 ************************************
00:28:23.173 03:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:28:23.173 * Looking for test storage...
00:28:23.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:23.174 03:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:25.074 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:25.074 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:25.074 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.074 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:25.075 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.075 
03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:25.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:25.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms
00:28:25.075
00:28:25.075 --- 10.0.0.2 ping statistics ---
00:28:25.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:25.075 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:25.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:25.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms
00:28:25.075
00:28:25.075 --- 10.0.0.1 ping statistics ---
00:28:25.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:25.075 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3283650
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3283650
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3283650 ']'
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:25.075 03:31:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:25.075 [2024-07-15 03:31:31.001001] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:28:25.075 [2024-07-15 03:31:31.001090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:25.075 EAL: No free 2048 kB hugepages reported on node 1
00:28:25.075 [2024-07-15 03:31:31.067176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:25.075 [2024-07-15 03:31:31.166087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:25.075 [2024-07-15 03:31:31.166141] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:25.075 [2024-07-15 03:31:31.166155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.075 [2024-07-15 03:31:31.166177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.075 [2024-07-15 03:31:31.166188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.075 [2024-07-15 03:31:31.166265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.075 [2024-07-15 03:31:31.166318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.075 [2024-07-15 03:31:31.166291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.075 [2024-07-15 03:31:31.166321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.334 [2024-07-15 03:31:31.317756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.334 Malloc0 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.334 [2024-07-15 03:31:31.371175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 ***
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:25.334 [
00:28:25.334 {
00:28:25.334 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:25.334 "subtype": "Discovery",
00:28:25.334 "listen_addresses": [],
00:28:25.334 "allow_any_host": true,
00:28:25.334 "hosts": []
00:28:25.334 },
00:28:25.334 {
00:28:25.334 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:25.334 "subtype": "NVMe",
00:28:25.334 "listen_addresses": [
00:28:25.334 {
00:28:25.334 "trtype": "TCP",
00:28:25.334 "adrfam": "IPv4",
00:28:25.334 "traddr": "10.0.0.2",
00:28:25.334 "trsvcid": "4420"
00:28:25.334 }
00:28:25.334 ],
00:28:25.334 "allow_any_host": true,
00:28:25.334 "hosts": [],
00:28:25.334 "serial_number": "SPDK00000000000001",
00:28:25.334 "model_number": "SPDK bdev Controller",
00:28:25.334 "max_namespaces": 2,
00:28:25.334 "min_cntlid": 1,
00:28:25.334 "max_cntlid": 65519,
00:28:25.334 "namespaces": [
00:28:25.334 {
00:28:25.334 "nsid": 1,
00:28:25.334 "bdev_name": "Malloc0",
00:28:25.334 "name": "Malloc0",
00:28:25.334 "nguid": "28490D07F38C4B88B09927E33165E773",
00:28:25.334 "uuid": "28490d07-f38c-4b88-b099-27e33165e773"
00:28:25.334 }
00:28:25.334 ]
00:28:25.334 }
00:28:25.334 ]
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3283683
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']'
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1
00:28:25.334 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:28:25.334 EAL: No free 2048 kB hugepages reported on node 1
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']'
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:25.593 Malloc1
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:25.593 [
00:28:25.593 {
00:28:25.593 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:25.593 "subtype": "Discovery",
00:28:25.593 "listen_addresses": [],
00:28:25.593 "allow_any_host": true,
00:28:25.593 "hosts": []
00:28:25.593 },
00:28:25.593 {
00:28:25.593 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:25.593 "subtype": "NVMe",
00:28:25.593 "listen_addresses": [
00:28:25.593 {
00:28:25.593 "trtype": "TCP",
00:28:25.593 "adrfam": "IPv4",
00:28:25.593 "traddr": "10.0.0.2",
00:28:25.593 "trsvcid": "4420"
00:28:25.593 }
00:28:25.593 ],
00:28:25.593 "allow_any_host": true,
00:28:25.593 "hosts": [],
00:28:25.593 "serial_number": "SPDK00000000000001",
00:28:25.593 "model_number": "SPDK bdev Controller",
00:28:25.593 "max_namespaces": 2,
00:28:25.593 "min_cntlid": 1,
00:28:25.593 "max_cntlid": 65519,
00:28:25.593 "namespaces": [
00:28:25.593 {
00:28:25.593 "nsid": 1,
00:28:25.593 "bdev_name": "Malloc0",
00:28:25.593 "name": "Malloc0",
00:28:25.593 "nguid": "28490D07F38C4B88B09927E33165E773",
00:28:25.593 "uuid": "28490d07-f38c-4b88-b099-27e33165e773"
00:28:25.593 },
00:28:25.593 {
00:28:25.593 "nsid": 2,
00:28:25.593 "bdev_name": "Malloc1",
00:28:25.593 "name": "Malloc1",
00:28:25.593 "nguid": "E5781F680E004996A49E6ACCD361AD9F",
00:28:25.593 "uuid": "e5781f68-0e00-4996-a49e-6accd361ad9f"
00:28:25.593 }
00:28:25.593 ]
00:28:25.593 }
00:28:25.593 ]
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3283683
00:28:25.593 Asynchronous Event Request test
00:28:25.593 Attaching to 10.0.0.2
00:28:25.593 Attached to 10.0.0.2
00:28:25.593 Registering asynchronous event callbacks...
00:28:25.593 Starting namespace attribute notice tests for all controllers...
00:28:25.593 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:28:25.593 aer_cb - Changed Namespace
00:28:25.593 Cleaning up...
00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.593 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:25.852 rmmod nvme_tcp 00:28:25.852 rmmod nvme_fabrics 00:28:25.852 rmmod nvme_keyring 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3283650 ']' 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3283650 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3283650 ']' 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3283650 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3283650 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3283650' 00:28:25.852 killing process with pid 3283650 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3283650 00:28:25.852 03:31:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3283650 00:28:26.110 03:31:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:26.110 03:31:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:26.110 03:31:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini
00:28:26.110 03:31:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:26.110 03:31:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:26.110 03:31:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:26.110 03:31:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:26.110 03:31:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:28.010 03:31:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:28.010
00:28:28.010 real 0m5.239s
00:28:28.010 user 0m4.151s
00:28:28.010 sys 0m1.863s
00:28:28.011 03:31:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:28.011 03:31:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:28:28.011 ************************************
00:28:28.011 END TEST nvmf_aer
00:28:28.011 ************************************
00:28:28.011 03:31:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:28:28.011 03:31:34 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:28:28.011 03:31:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:28:28.011 03:31:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:28.011 03:31:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:28.011 ************************************
00:28:28.011 START TEST nvmf_async_init
00:28:28.011 ************************************
00:28:28.011 03:31:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:28:28.268 * Looking for test storage...
00:28:28.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:28.268 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d4cfbb8c1d894209bed91963668a00d1 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:28.269 03:31:34 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:28.269 03:31:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:30.165 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:30.165 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:30.165 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:30.165 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.165 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:30.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:28:30.423 00:28:30.423 --- 10.0.0.2 ping statistics --- 00:28:30.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.423 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:30.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:28:30.423 00:28:30.423 --- 10.0.0.1 ping statistics --- 00:28:30.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.423 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3285616 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3285616 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3285616 ']' 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:30.423 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.423 [2024-07-15 03:31:36.398654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
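Condensed, the nvmf_tcp_init sequence just traced splits the two E810 ports across network namespaces, opens the NVMe/TCP port in the firewall, verifies reachability both ways, and only then launches the target inside the target namespace. A sketch of the same commands (addresses and interface names as detected above; the nvmf_tgt path is relative to the SPDK build tree):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &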
00:28:30.423 [2024-07-15 03:31:36.398731] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.423 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.424 [2024-07-15 03:31:36.469108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.424 [2024-07-15 03:31:36.560304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.424 [2024-07-15 03:31:36.560369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.424 [2024-07-15 03:31:36.560385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.424 [2024-07-15 03:31:36.560399] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.424 [2024-07-15 03:31:36.560410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.424 [2024-07-15 03:31:36.560448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.681 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:30.681 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.682 [2024-07-15 03:31:36.706794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.682 null0 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.682 03:31:36 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d4cfbb8c1d894209bed91963668a00d1 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.682 [2024-07-15 03:31:36.747055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.682 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.939 nvme0n1 00:28:30.939 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.939 03:31:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:30.939 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.939 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.939 [ 00:28:30.939 { 00:28:30.939 "name": "nvme0n1", 00:28:30.939 "aliases": [ 00:28:30.939 "d4cfbb8c-1d89-4209-bed9-1963668a00d1" 00:28:30.939 ], 00:28:30.939 "product_name": "NVMe disk", 00:28:30.939 "block_size": 512, 00:28:30.939 "num_blocks": 2097152, 00:28:30.939 "uuid": "d4cfbb8c-1d89-4209-bed9-1963668a00d1", 00:28:30.939 "assigned_rate_limits": { 00:28:30.939 "rw_ios_per_sec": 0, 00:28:30.939 "rw_mbytes_per_sec": 0, 00:28:30.939 "r_mbytes_per_sec": 0, 00:28:30.939 "w_mbytes_per_sec": 0 00:28:30.939 }, 00:28:30.939 "claimed": false, 00:28:30.939 "zoned": false, 00:28:30.939 "supported_io_types": { 00:28:30.939 "read": true, 00:28:30.939 "write": true, 00:28:30.939 "unmap": false, 00:28:30.939 "flush": true, 00:28:30.939 "reset": true, 00:28:30.939 "nvme_admin": true, 00:28:30.939 "nvme_io": true, 00:28:30.939 "nvme_io_md": false, 00:28:30.939 "write_zeroes": true, 00:28:30.939 "zcopy": false, 00:28:30.939 "get_zone_info": false, 00:28:30.939 "zone_management": false, 00:28:30.939 "zone_append": false, 00:28:30.939 "compare": true, 00:28:30.939 "compare_and_write": true, 00:28:30.939 "abort": true, 00:28:30.939 "seek_hole": false, 00:28:30.939 "seek_data": false, 00:28:30.939 "copy": true, 00:28:30.939 "nvme_iov_md": false 00:28:30.939 }, 00:28:30.939 "memory_domains": [ 00:28:30.939 { 00:28:30.939 "dma_device_id": "system", 00:28:30.939 "dma_device_type": 1 00:28:30.939 } 00:28:30.939 ], 00:28:30.939 "driver_specific": { 00:28:30.939 "nvme": [ 00:28:30.939 { 00:28:30.939 "trid": { 00:28:30.939 "trtype": "TCP", 00:28:30.939 "adrfam": "IPv4", 00:28:30.939 "traddr": "10.0.0.2", 
00:28:30.939 "trsvcid": "4420", 00:28:30.939 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:30.939 }, 00:28:30.939 "ctrlr_data": { 00:28:30.939 "cntlid": 1, 00:28:30.939 "vendor_id": "0x8086", 00:28:30.939 "model_number": "SPDK bdev Controller", 00:28:30.940 "serial_number": "00000000000000000000", 00:28:30.940 "firmware_revision": "24.09", 00:28:30.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:30.940 "oacs": { 00:28:30.940 "security": 0, 00:28:30.940 "format": 0, 00:28:30.940 "firmware": 0, 00:28:30.940 "ns_manage": 0 00:28:30.940 }, 00:28:30.940 "multi_ctrlr": true, 00:28:30.940 "ana_reporting": false 00:28:30.940 }, 00:28:30.940 "vs": { 00:28:30.940 "nvme_version": "1.3" 00:28:30.940 }, 00:28:30.940 "ns_data": { 00:28:30.940 "id": 1, 00:28:30.940 "can_share": true 00:28:30.940 } 00:28:30.940 } 00:28:30.940 ], 00:28:30.940 "mp_policy": "active_passive" 00:28:30.940 } 00:28:30.940 } 00:28:30.940 ] 00:28:30.940 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.940 03:31:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:30.940 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.940 03:31:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.940 [2024-07-15 03:31:37.000302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:30.940 [2024-07-15 03:31:37.000392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf98500 (9): Bad file descriptor 00:28:31.197 [2024-07-15 03:31:37.173037] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:31.197 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.197 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:31.197 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.197 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.198 [ 00:28:31.198 { 00:28:31.198 "name": "nvme0n1", 00:28:31.198 "aliases": [ 00:28:31.198 "d4cfbb8c-1d89-4209-bed9-1963668a00d1" 00:28:31.198 ], 00:28:31.198 "product_name": "NVMe disk", 00:28:31.198 "block_size": 512, 00:28:31.198 "num_blocks": 2097152, 00:28:31.198 "uuid": "d4cfbb8c-1d89-4209-bed9-1963668a00d1", 00:28:31.198 "assigned_rate_limits": { 00:28:31.198 "rw_ios_per_sec": 0, 00:28:31.198 "rw_mbytes_per_sec": 0, 00:28:31.198 "r_mbytes_per_sec": 0, 00:28:31.198 "w_mbytes_per_sec": 0 00:28:31.198 }, 00:28:31.198 "claimed": false, 00:28:31.198 "zoned": false, 00:28:31.198 "supported_io_types": { 00:28:31.198 "read": true, 00:28:31.198 "write": true, 00:28:31.198 "unmap": false, 00:28:31.198 "flush": true, 00:28:31.198 "reset": true, 00:28:31.198 "nvme_admin": true, 00:28:31.198 "nvme_io": true, 00:28:31.198 "nvme_io_md": false, 00:28:31.198 "write_zeroes": true, 00:28:31.198 "zcopy": false, 00:28:31.198 "get_zone_info": false, 00:28:31.198 "zone_management": false, 00:28:31.198 "zone_append": false, 00:28:31.198 "compare": true, 00:28:31.198 "compare_and_write": true, 00:28:31.198 "abort": true, 00:28:31.198 "seek_hole": false, 00:28:31.198 "seek_data": false, 00:28:31.198 "copy": true, 00:28:31.198 "nvme_iov_md": false 00:28:31.198 }, 00:28:31.198 "memory_domains": [ 00:28:31.198 { 00:28:31.198 "dma_device_id": "system", 00:28:31.198 "dma_device_type": 1 
00:28:31.198 } 00:28:31.198 ], 00:28:31.198 "driver_specific": { 00:28:31.198 "nvme": [ 00:28:31.198 { 00:28:31.198 "trid": { 00:28:31.198 "trtype": "TCP", 00:28:31.198 "adrfam": "IPv4", 00:28:31.198 "traddr": "10.0.0.2", 00:28:31.198 "trsvcid": "4420", 00:28:31.198 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:31.198 }, 00:28:31.198 "ctrlr_data": { 00:28:31.198 "cntlid": 2, 00:28:31.198 "vendor_id": "0x8086", 00:28:31.198 "model_number": "SPDK bdev Controller", 00:28:31.198 "serial_number": "00000000000000000000", 00:28:31.198 "firmware_revision": "24.09", 00:28:31.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.198 "oacs": { 00:28:31.198 "security": 0, 00:28:31.198 "format": 0, 00:28:31.198 "firmware": 0, 00:28:31.198 "ns_manage": 0 00:28:31.198 }, 00:28:31.198 "multi_ctrlr": true, 00:28:31.198 "ana_reporting": false 00:28:31.198 }, 00:28:31.198 "vs": { 00:28:31.198 "nvme_version": "1.3" 00:28:31.198 }, 00:28:31.198 "ns_data": { 00:28:31.198 "id": 1, 00:28:31.198 "can_share": true 00:28:31.198 } 00:28:31.198 } 00:28:31.198 ], 00:28:31.198 "mp_policy": "active_passive" 00:28:31.198 } 00:28:31.198 } 00:28:31.198 ] 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.O6jw42v0wL 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.O6jw42v0wL 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.198 [2024-07-15 03:31:37.225053] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:31.198 [2024-07-15 03:31:37.225200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O6jw42v0wL 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
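The TLS leg of async_init just locked the subsystem down and published a PSK-protected listener on a second port; the reconnect through it with the same key follows just below in the trace. Replayed standalone, with scripts/rpc.py standing in for the test's rpc_cmd wrapper (same PSK and NQNs as in the trace):

    key=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
    chmod 0600 "$key"                                        # restrict the key file, as the test does
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$key"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"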
00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.198 [2024-07-15 03:31:37.233077] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O6jw42v0wL 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.198 [2024-07-15 03:31:37.241107] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:31.198 [2024-07-15 03:31:37.241177] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:31.198 nvme0n1 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.198 [ 00:28:31.198 { 00:28:31.198 "name": "nvme0n1", 00:28:31.198 "aliases": [ 00:28:31.198 "d4cfbb8c-1d89-4209-bed9-1963668a00d1" 00:28:31.198 ], 00:28:31.198 "product_name": "NVMe disk", 00:28:31.198 "block_size": 512, 00:28:31.198 "num_blocks": 2097152, 00:28:31.198 "uuid": "d4cfbb8c-1d89-4209-bed9-1963668a00d1", 00:28:31.198 "assigned_rate_limits": { 00:28:31.198 "rw_ios_per_sec": 0, 00:28:31.198 "rw_mbytes_per_sec": 0, 00:28:31.198 "r_mbytes_per_sec": 0, 00:28:31.198 "w_mbytes_per_sec": 0 00:28:31.198 }, 00:28:31.198 "claimed": false, 00:28:31.198 "zoned": false, 00:28:31.198 "supported_io_types": { 00:28:31.198 "read": true, 00:28:31.198 "write": true, 00:28:31.198 "unmap": false, 00:28:31.198 "flush": true, 00:28:31.198 "reset": true, 00:28:31.198 "nvme_admin": true, 00:28:31.198 "nvme_io": true, 00:28:31.198 "nvme_io_md": false, 00:28:31.198 "write_zeroes": true, 00:28:31.198 "zcopy": false, 00:28:31.198 "get_zone_info": false, 00:28:31.198 "zone_management": false, 00:28:31.198 "zone_append": false, 00:28:31.198 "compare": true, 00:28:31.198 "compare_and_write": true, 00:28:31.198 "abort": true, 00:28:31.198 "seek_hole": false, 00:28:31.198 "seek_data": false, 00:28:31.198 "copy": true, 00:28:31.198 "nvme_iov_md": false 00:28:31.198 }, 00:28:31.198 "memory_domains": [ 00:28:31.198 { 00:28:31.198 "dma_device_id": "system", 00:28:31.198 "dma_device_type": 1 00:28:31.198 } 00:28:31.198 ], 00:28:31.198 "driver_specific": { 00:28:31.198 "nvme": [ 00:28:31.198 { 00:28:31.198 "trid": { 00:28:31.198 "trtype": "TCP", 00:28:31.198 "adrfam": "IPv4", 00:28:31.198 "traddr": "10.0.0.2", 00:28:31.198 "trsvcid": "4421", 00:28:31.198 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:31.198 }, 00:28:31.198 "ctrlr_data": { 00:28:31.198 "cntlid": 3, 00:28:31.198 "vendor_id": "0x8086", 00:28:31.198 "model_number": "SPDK bdev Controller", 00:28:31.198 "serial_number": "00000000000000000000", 00:28:31.198 "firmware_revision": "24.09", 00:28:31.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:28:31.198 "oacs": { 00:28:31.198 "security": 0, 00:28:31.198 "format": 0, 00:28:31.198 "firmware": 0, 00:28:31.198 "ns_manage": 0 00:28:31.198 }, 00:28:31.198 "multi_ctrlr": true, 00:28:31.198 "ana_reporting": false 00:28:31.198 }, 00:28:31.198 "vs": { 00:28:31.198 "nvme_version": "1.3" 00:28:31.198 }, 00:28:31.198 "ns_data": { 00:28:31.198 "id": 1, 00:28:31.198 "can_share": true 00:28:31.198 } 00:28:31.198 } 00:28:31.198 ], 00:28:31.198 "mp_policy": "active_passive" 00:28:31.198 } 00:28:31.198 } 00:28:31.198 ] 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.198 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.O6jw42v0wL 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:31.455 rmmod nvme_tcp 00:28:31.455 rmmod nvme_fabrics 00:28:31.455 rmmod nvme_keyring 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3285616 ']' 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3285616 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3285616 ']' 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3285616 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.455 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3285616 00:28:31.456 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:31.456 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:31.456 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3285616' 00:28:31.456 killing process with pid 3285616 00:28:31.456 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3285616 00:28:31.456 [2024-07-15 03:31:37.424482] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:28:31.456 [2024-07-15 03:31:37.424516] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:31.456 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3285616 00:28:31.713 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:31.713 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:31.713 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:31.713 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.713 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:31.713 03:31:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.713 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.713 03:31:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.616 03:31:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:33.616 00:28:33.616 real 0m5.517s 00:28:33.616 user 0m2.117s 00:28:33.616 sys 0m1.793s 00:28:33.616 03:31:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:33.616 03:31:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:33.616 ************************************ 00:28:33.616 END TEST nvmf_async_init 00:28:33.616 ************************************ 00:28:33.616 03:31:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:33.616 03:31:39 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:33.616 03:31:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:33.616 03:31:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.616 03:31:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.616 ************************************ 00:28:33.616 START TEST dma 00:28:33.616 ************************************ 00:28:33.616 03:31:39 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:33.875 * Looking for test storage... 
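For reference, the nvmftestfini teardown that closed the async_init run above reduces to: drop the PSK file, unload the host-side NVMe modules, kill the target only after confirming the pid still belongs to an SPDK reactor, and undo the namespace plumbing. A sketch (the internals of _remove_spdk_ns are not traced here; deleting the namespace is the assumed effect):

    rm -f "$key"                                             # PSK file from the TLS steps
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    if kill -0 "$nvmfpid" 2>/dev/null &&
       [ "$(ps --no-headers -o comm= "$nvmfpid")" = reactor_0 ]; then
        kill "$nvmfpid"                                      # SIGTERM lets nvmf_tgt run its shutdown path
    fi
    ip netns delete cvl_0_0_ns_spdk                          # assumption: what _remove_spdk_ns amounts to
    ip -4 addr flush cvl_0_1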
00:28:33.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.875 03:31:39 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.875 03:31:39 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.875 03:31:39 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.875 03:31:39 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.875 03:31:39 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.875 03:31:39 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.875 03:31:39 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.875 03:31:39 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:33.875 03:31:39 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:33.875 03:31:39 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:33.875 03:31:39 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:33.875 03:31:39 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:33.875 00:28:33.875 real 0m0.062s 00:28:33.876 user 0m0.026s 00:28:33.876 sys 0m0.041s 00:28:33.876 03:31:39 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:33.876 03:31:39 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:33.876 ************************************ 00:28:33.876 END TEST dma 00:28:33.876 ************************************ 00:28:33.876 03:31:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:33.876 03:31:39 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:33.876 03:31:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:33.876 03:31:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.876 03:31:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.876 ************************************ 00:28:33.876 START TEST nvmf_identify 00:28:33.876 ************************************ 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:33.876 * Looking for test storage... 
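The entire dma suite is a no-op on TCP: dma.sh checks the transport and exits 0 before registering any tests, which is why END TEST dma follows almost immediately above. The guard in isolation (TEST_TRANSPORT is an illustrative name for how the suite's common.sh exposes --transport):

    if [ "$TEST_TRANSPORT" != rdma ]; then
        exit 0    # suite does not apply on tcp; exiting 0 still records a pass
    fi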
00:28:33.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:33.876 03:31:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:36.410 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:36.410 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:36.410 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:36.410 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.410 03:31:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.410 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.410 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.410 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:36.410 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.410 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.410 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.410 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:36.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:28:36.411 00:28:36.411 --- 10.0.0.2 ping statistics --- 00:28:36.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.411 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:36.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:28:36.411 00:28:36.411 --- 10.0.0.1 ping statistics --- 00:28:36.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.411 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3287755 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3287755 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3287755 ']' 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:36.411 [2024-07-15 03:31:42.194341] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:36.411 [2024-07-15 03:31:42.194413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.411 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.411 [2024-07-15 03:31:42.266585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:36.411 [2024-07-15 03:31:42.358514] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
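Note the wider core mask this time: identify starts the target with -m 0xF, so four cores are reported, versus the single core async_init used. waitforlisten then blocks until the target's RPC socket answers. A simplified sketch of that helper (the real one lives in autotest_common.sh; probing with scripts/rpc.py spdk_get_version is one workable check):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1                           # app died during startup
            scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                                             # timed out
    }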
00:28:36.411 [2024-07-15 03:31:42.358566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.411 [2024-07-15 03:31:42.358582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.411 [2024-07-15 03:31:42.358595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.411 [2024-07-15 03:31:42.358607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.411 [2024-07-15 03:31:42.358686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.411 [2024-07-15 03:31:42.358742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.411 [2024-07-15 03:31:42.358858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.411 [2024-07-15 03:31:42.358860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:36.411 [2024-07-15 03:31:42.486704] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:36.411 Malloc0 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.411 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:36.671 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.671 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.671 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable
00:28:36.671 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:36.671 [2024-07-15 03:31:42.559707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:36.671 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:36.671 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:36.671 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:36.672 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:36.672 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:36.672 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:28:36.672 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:36.672 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:36.672 [
00:28:36.672   {
00:28:36.672     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:36.672     "subtype": "Discovery",
00:28:36.672     "listen_addresses": [
00:28:36.672       {
00:28:36.672         "trtype": "TCP",
00:28:36.672         "adrfam": "IPv4",
00:28:36.672         "traddr": "10.0.0.2",
00:28:36.672         "trsvcid": "4420"
00:28:36.672       }
00:28:36.672     ],
00:28:36.672     "allow_any_host": true,
00:28:36.672     "hosts": []
00:28:36.672   },
00:28:36.672   {
00:28:36.672     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:36.672     "subtype": "NVMe",
00:28:36.672     "listen_addresses": [
00:28:36.672       {
00:28:36.672         "trtype": "TCP",
00:28:36.672         "adrfam": "IPv4",
00:28:36.672         "traddr": "10.0.0.2",
00:28:36.672         "trsvcid": "4420"
00:28:36.672       }
00:28:36.672     ],
00:28:36.672     "allow_any_host": true,
00:28:36.672     "hosts": [],
00:28:36.672     "serial_number": "SPDK00000000000001",
00:28:36.672     "model_number": "SPDK bdev Controller",
00:28:36.672     "max_namespaces": 32,
00:28:36.672     "min_cntlid": 1,
00:28:36.672     "max_cntlid": 65519,
00:28:36.672     "namespaces": [
00:28:36.672       {
00:28:36.672         "nsid": 1,
00:28:36.672         "bdev_name": "Malloc0",
00:28:36.672         "name": "Malloc0",
00:28:36.672         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:28:36.672         "eui64": "ABCDEF0123456789",
00:28:36.672         "uuid": "a3dc2ef5-74e2-486e-8e4e-0cdf6c8be555"
00:28:36.672       }
00:28:36.672     ]
00:28:36.672   }
00:28:36.672 ]
00:28:36.672 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:36.672 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:28:36.672 [2024-07-15 03:31:42.600138] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
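The target-side configuration traced above (TCP transport, the Malloc0 bdev, subsystem cnode1 with one namespace, a data listener and a discovery listener, then the nvmf_get_subsystems dump) can be reproduced by hand. A rough sketch, assuming $SPDK_DIR is an SPDK checkout, that rpc_cmd in identify.sh@24 through @35 forwards to scripts/rpc.py on the default /var/tmp/spdk.sock, and that the cvl_0_0_ns_spdk namespace already exists; the polling loop is a crude stand-in for the harness's waitforlisten helper:

# Start the target as in identify.sh@18 (-m 0xF gives the four reactors seen above).
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# Wait for the RPC socket to answer.
until $SPDK_DIR/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# Same RPC sequence as the xtrace above.
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
  --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$SPDK_DIR/scripts/rpc.py nvmf_get_subsystems    # prints JSON like the dump above

The identify run starting here targets the discovery subsystem through the -r transport ID string (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery); -L all turns on all debug log flags, which is what produces the *DEBUG* lines that follow.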
00:28:36.672 [2024-07-15 03:31:42.600190] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287882 ] 00:28:36.672 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.672 [2024-07-15 03:31:42.634279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:36.672 [2024-07-15 03:31:42.634352] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:36.672 [2024-07-15 03:31:42.634362] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:36.672 [2024-07-15 03:31:42.634377] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:36.672 [2024-07-15 03:31:42.634389] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:36.672 [2024-07-15 03:31:42.637933] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:36.672 [2024-07-15 03:31:42.637999] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1be4fe0 0 00:28:36.672 [2024-07-15 03:31:42.644890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:36.672 [2024-07-15 03:31:42.644915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:36.672 [2024-07-15 03:31:42.644932] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:36.672 [2024-07-15 03:31:42.644938] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:36.672 [2024-07-15 03:31:42.644992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.645006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.645017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1be4fe0) 00:28:36.672 [2024-07-15 03:31:42.645037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:36.672 [2024-07-15 03:31:42.645063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4b880, cid 0, qid 0 00:28:36.672 [2024-07-15 03:31:42.652890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.672 [2024-07-15 03:31:42.652908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.672 [2024-07-15 03:31:42.652916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.652924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4b880) on tqpair=0x1be4fe0 00:28:36.672 [2024-07-15 03:31:42.652950] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:36.672 [2024-07-15 03:31:42.652961] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:36.672 [2024-07-15 03:31:42.652970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:36.672 [2024-07-15 03:31:42.652993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653002] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1be4fe0) 00:28:36.672 [2024-07-15 03:31:42.653020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.672 [2024-07-15 03:31:42.653044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4b880, cid 0, qid 0 00:28:36.672 [2024-07-15 03:31:42.653169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.672 [2024-07-15 03:31:42.653182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.672 [2024-07-15 03:31:42.653189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653196] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4b880) on tqpair=0x1be4fe0 00:28:36.672 [2024-07-15 03:31:42.653214] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:36.672 [2024-07-15 03:31:42.653227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:36.672 [2024-07-15 03:31:42.653239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653246] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1be4fe0) 00:28:36.672 [2024-07-15 03:31:42.653262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.672 [2024-07-15 03:31:42.653283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4b880, cid 0, qid 0 00:28:36.672 [2024-07-15 03:31:42.653391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.672 [2024-07-15 03:31:42.653406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.672 [2024-07-15 03:31:42.653413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4b880) on tqpair=0x1be4fe0 00:28:36.672 [2024-07-15 03:31:42.653429] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:36.672 [2024-07-15 03:31:42.653444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:36.672 [2024-07-15 03:31:42.653456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1be4fe0) 00:28:36.672 [2024-07-15 03:31:42.653485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.672 [2024-07-15 03:31:42.653506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4b880, cid 0, qid 0 00:28:36.672 [2024-07-15 03:31:42.653606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.672 
[2024-07-15 03:31:42.653618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.672 [2024-07-15 03:31:42.653625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4b880) on tqpair=0x1be4fe0 00:28:36.672 [2024-07-15 03:31:42.653642] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:36.672 [2024-07-15 03:31:42.653658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1be4fe0) 00:28:36.672 [2024-07-15 03:31:42.653684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.672 [2024-07-15 03:31:42.653704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4b880, cid 0, qid 0 00:28:36.672 [2024-07-15 03:31:42.653801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.672 [2024-07-15 03:31:42.653813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.672 [2024-07-15 03:31:42.653820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.672 [2024-07-15 03:31:42.653827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4b880) on tqpair=0x1be4fe0 00:28:36.672 [2024-07-15 03:31:42.653836] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:36.672 [2024-07-15 03:31:42.653845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:36.672 [2024-07-15 03:31:42.653858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:36.672 [2024-07-15 03:31:42.653968] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:36.672 [2024-07-15 03:31:42.653979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:36.672 [2024-07-15 03:31:42.653993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.654001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.654007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1be4fe0) 00:28:36.673 [2024-07-15 03:31:42.654018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.673 [2024-07-15 03:31:42.654039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4b880, cid 0, qid 0 00:28:36.673 [2024-07-15 03:31:42.654174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.673 [2024-07-15 03:31:42.654189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.673 [2024-07-15 03:31:42.654196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.654202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4b880) on tqpair=0x1be4fe0 00:28:36.673 [2024-07-15 03:31:42.654211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:36.673 [2024-07-15 03:31:42.654228] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.654250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.654257] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1be4fe0) 00:28:36.673 [2024-07-15 03:31:42.654268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.673 [2024-07-15 03:31:42.654288] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4b880, cid 0, qid 0 00:28:36.673 [2024-07-15 03:31:42.654391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.673 [2024-07-15 03:31:42.654406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.673 [2024-07-15 03:31:42.654413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.654420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4b880) on tqpair=0x1be4fe0 00:28:36.673 [2024-07-15 03:31:42.654428] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:36.673 [2024-07-15 03:31:42.654436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:36.673 [2024-07-15 03:31:42.654451] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:36.673 [2024-07-15 03:31:42.654465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:36.673 [2024-07-15 03:31:42.654482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.654490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1be4fe0) 00:28:36.673 [2024-07-15 03:31:42.654501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.673 [2024-07-15 03:31:42.654522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4b880, cid 0, qid 0 00:28:36.673 [2024-07-15 03:31:42.654698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.673 [2024-07-15 03:31:42.654714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.673 [2024-07-15 03:31:42.654721] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.654728] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1be4fe0): datao=0, datal=4096, cccid=0 00:28:36.673 [2024-07-15 03:31:42.654736] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c4b880) on tqpair(0x1be4fe0): expected_datao=0, payload_size=4096 00:28:36.673 [2024-07-15 03:31:42.654744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.654755] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.654764] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.673 [2024-07-15 03:31:42.695037] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.673 [2024-07-15 03:31:42.695044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4b880) on tqpair=0x1be4fe0 00:28:36.673 [2024-07-15 03:31:42.695067] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:36.673 [2024-07-15 03:31:42.695081] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:36.673 [2024-07-15 03:31:42.695090] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:36.673 [2024-07-15 03:31:42.695100] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:36.673 [2024-07-15 03:31:42.695109] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:36.673 [2024-07-15 03:31:42.695120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:36.673 [2024-07-15 03:31:42.695137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:36.673 [2024-07-15 03:31:42.695151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1be4fe0) 00:28:36.673 [2024-07-15 03:31:42.695176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:36.673 [2024-07-15 03:31:42.695199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4b880, cid 0, qid 0 00:28:36.673 [2024-07-15 03:31:42.695313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.673 [2024-07-15 03:31:42.695329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.673 [2024-07-15 03:31:42.695336] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4b880) on tqpair=0x1be4fe0 00:28:36.673 [2024-07-15 03:31:42.695357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1be4fe0) 00:28:36.673 [2024-07-15 03:31:42.695381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.673 [2024-07-15 03:31:42.695391] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1be4fe0) 00:28:36.673 [2024-07-15 03:31:42.695412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.673 [2024-07-15 03:31:42.695421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1be4fe0) 00:28:36.673 [2024-07-15 03:31:42.695442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.673 [2024-07-15 03:31:42.695452] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1be4fe0) 00:28:36.673 [2024-07-15 03:31:42.695473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.673 [2024-07-15 03:31:42.695482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:36.673 [2024-07-15 03:31:42.695502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:36.673 [2024-07-15 03:31:42.695516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1be4fe0) 00:28:36.673 [2024-07-15 03:31:42.695533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.673 [2024-07-15 03:31:42.695575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4b880, cid 0, qid 0 00:28:36.673 [2024-07-15 03:31:42.695587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4ba00, cid 1, qid 0 00:28:36.673 [2024-07-15 03:31:42.695594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4bb80, cid 2, qid 0 00:28:36.673 [2024-07-15 03:31:42.695602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4bd00, cid 3, qid 0 00:28:36.673 [2024-07-15 03:31:42.695624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4be80, cid 4, qid 0 00:28:36.673 [2024-07-15 03:31:42.695802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.673 [2024-07-15 03:31:42.695815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.673 [2024-07-15 03:31:42.695821] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4be80) on tqpair=0x1be4fe0 00:28:36.673 [2024-07-15 03:31:42.695838] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:36.673 [2024-07-15 03:31:42.695848] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:36.673 [2024-07-15 03:31:42.695865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.673 [2024-07-15 03:31:42.695874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1be4fe0) 00:28:36.673 [2024-07-15 03:31:42.699896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.673 [2024-07-15 03:31:42.699921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4be80, cid 4, qid 0 00:28:36.673 [2024-07-15 03:31:42.700087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.673 [2024-07-15 03:31:42.700103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.673 [2024-07-15 03:31:42.700110] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700117] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1be4fe0): datao=0, datal=4096, cccid=4 00:28:36.674 [2024-07-15 03:31:42.700125] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c4be80) on tqpair(0x1be4fe0): expected_datao=0, payload_size=4096 00:28:36.674 [2024-07-15 03:31:42.700132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700142] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700150] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.674 [2024-07-15 03:31:42.700171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.674 [2024-07-15 03:31:42.700177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4be80) on tqpair=0x1be4fe0 00:28:36.674 [2024-07-15 03:31:42.700203] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:36.674 [2024-07-15 03:31:42.700245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700256] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1be4fe0) 00:28:36.674 [2024-07-15 03:31:42.700267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.674 [2024-07-15 03:31:42.700279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1be4fe0) 00:28:36.674 [2024-07-15 03:31:42.700302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.674 [2024-07-15 03:31:42.700335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1c4be80, cid 4, qid 0 00:28:36.674 [2024-07-15 03:31:42.700347] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4c000, cid 5, qid 0 00:28:36.674 [2024-07-15 03:31:42.700489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.674 [2024-07-15 03:31:42.700501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.674 [2024-07-15 03:31:42.700508] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700514] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1be4fe0): datao=0, datal=1024, cccid=4 00:28:36.674 [2024-07-15 03:31:42.700522] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c4be80) on tqpair(0x1be4fe0): expected_datao=0, payload_size=1024 00:28:36.674 [2024-07-15 03:31:42.700529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700539] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700546] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.674 [2024-07-15 03:31:42.700563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.674 [2024-07-15 03:31:42.700569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.700576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4c000) on tqpair=0x1be4fe0 00:28:36.674 [2024-07-15 03:31:42.740989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.674 [2024-07-15 03:31:42.741008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.674 [2024-07-15 03:31:42.741016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.741023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4be80) on tqpair=0x1be4fe0 00:28:36.674 [2024-07-15 03:31:42.741043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.741053] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1be4fe0) 00:28:36.674 [2024-07-15 03:31:42.741064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.674 [2024-07-15 03:31:42.741094] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4be80, cid 4, qid 0 00:28:36.674 [2024-07-15 03:31:42.741226] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.674 [2024-07-15 03:31:42.741241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.674 [2024-07-15 03:31:42.741248] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.741255] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1be4fe0): datao=0, datal=3072, cccid=4 00:28:36.674 [2024-07-15 03:31:42.741262] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c4be80) on tqpair(0x1be4fe0): expected_datao=0, payload_size=3072 00:28:36.674 [2024-07-15 03:31:42.741270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.741280] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.674 [2024-07-15 03:31:42.741287] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:36.674 [2024-07-15 03:31:42.741298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.674 [2024-07-15 03:31:42.741308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.674 [2024-07-15 03:31:42.741314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.674 [2024-07-15 03:31:42.741321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4be80) on tqpair=0x1be4fe0
00:28:36.674 [2024-07-15 03:31:42.741336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.674 [2024-07-15 03:31:42.741345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1be4fe0)
00:28:36.674 [2024-07-15 03:31:42.741355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.674 [2024-07-15 03:31:42.741387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4be80, cid 4, qid 0
00:28:36.674 [2024-07-15 03:31:42.741510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:36.674 [2024-07-15 03:31:42.741526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:36.674 [2024-07-15 03:31:42.741533] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:36.674 [2024-07-15 03:31:42.741539] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1be4fe0): datao=0, datal=8, cccid=4
00:28:36.674 [2024-07-15 03:31:42.741547] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c4be80) on tqpair(0x1be4fe0): expected_datao=0, payload_size=8
00:28:36.674 [2024-07-15 03:31:42.741554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.674 [2024-07-15 03:31:42.741563] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:36.674 [2024-07-15 03:31:42.741570] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:36.674 [2024-07-15 03:31:42.781995] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.674 [2024-07-15 03:31:42.782014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.674 [2024-07-15 03:31:42.782021] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.674 [2024-07-15 03:31:42.782028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4be80) on tqpair=0x1be4fe0
00:28:36.674 =====================================================
00:28:36.674 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:36.674 =====================================================
00:28:36.674 Controller Capabilities/Features
00:28:36.674 ================================
00:28:36.674 Vendor ID: 0000
00:28:36.674 Subsystem Vendor ID: 0000
00:28:36.674 Serial Number: ....................
00:28:36.674 Model Number: ........................................
00:28:36.674 Firmware Version: 24.09
00:28:36.674 Recommended Arb Burst: 0
00:28:36.674 IEEE OUI Identifier: 00 00 00
00:28:36.674 Multi-path I/O
00:28:36.674 May have multiple subsystem ports: No
00:28:36.674 May have multiple controllers: No
00:28:36.674 Associated with SR-IOV VF: No
00:28:36.674 Max Data Transfer Size: 131072
00:28:36.674 Max Number of Namespaces: 0
00:28:36.674 Max Number of I/O Queues: 1024
00:28:36.674 NVMe Specification Version (VS): 1.3
00:28:36.674 NVMe Specification Version (Identify): 1.3
00:28:36.674 Maximum Queue Entries: 128
00:28:36.674 Contiguous Queues Required: Yes
00:28:36.674 Arbitration Mechanisms Supported
00:28:36.674 Weighted Round Robin: Not Supported
00:28:36.674 Vendor Specific: Not Supported
00:28:36.674 Reset Timeout: 15000 ms
00:28:36.674 Doorbell Stride: 4 bytes
00:28:36.674 NVM Subsystem Reset: Not Supported
00:28:36.674 Command Sets Supported
00:28:36.674 NVM Command Set: Supported
00:28:36.674 Boot Partition: Not Supported
00:28:36.674 Memory Page Size Minimum: 4096 bytes
00:28:36.674 Memory Page Size Maximum: 4096 bytes
00:28:36.674 Persistent Memory Region: Not Supported
00:28:36.674 Optional Asynchronous Events Supported
00:28:36.674 Namespace Attribute Notices: Not Supported
00:28:36.674 Firmware Activation Notices: Not Supported
00:28:36.674 ANA Change Notices: Not Supported
00:28:36.674 PLE Aggregate Log Change Notices: Not Supported
00:28:36.674 LBA Status Info Alert Notices: Not Supported
00:28:36.674 EGE Aggregate Log Change Notices: Not Supported
00:28:36.674 Normal NVM Subsystem Shutdown event: Not Supported
00:28:36.674 Zone Descriptor Change Notices: Not Supported
00:28:36.674 Discovery Log Change Notices: Supported
00:28:36.674 Controller Attributes
00:28:36.674 128-bit Host Identifier: Not Supported
00:28:36.674 Non-Operational Permissive Mode: Not Supported
00:28:36.674 NVM Sets: Not Supported
00:28:36.674 Read Recovery Levels: Not Supported
00:28:36.674 Endurance Groups: Not Supported
00:28:36.674 Predictable Latency Mode: Not Supported
00:28:36.674 Traffic Based Keep ALive: Not Supported
00:28:36.674 Namespace Granularity: Not Supported
00:28:36.674 SQ Associations: Not Supported
00:28:36.674 UUID List: Not Supported
00:28:36.674 Multi-Domain Subsystem: Not Supported
00:28:36.674 Fixed Capacity Management: Not Supported
00:28:36.674 Variable Capacity Management: Not Supported
00:28:36.674 Delete Endurance Group: Not Supported
00:28:36.674 Delete NVM Set: Not Supported
00:28:36.675 Extended LBA Formats Supported: Not Supported
00:28:36.675 Flexible Data Placement Supported: Not Supported
00:28:36.675
00:28:36.675 Controller Memory Buffer Support
00:28:36.675 ================================
00:28:36.675 Supported: No
00:28:36.675
00:28:36.675 Persistent Memory Region Support
00:28:36.675 ================================
00:28:36.675 Supported: No
00:28:36.675
00:28:36.675 Admin Command Set Attributes
00:28:36.675 ============================
00:28:36.675 Security Send/Receive: Not Supported
00:28:36.675 Format NVM: Not Supported
00:28:36.675 Firmware Activate/Download: Not Supported
00:28:36.675 Namespace Management: Not Supported
00:28:36.675 Device Self-Test: Not Supported
00:28:36.675 Directives: Not Supported
00:28:36.675 NVMe-MI: Not Supported
00:28:36.675 Virtualization Management: Not Supported
00:28:36.675 Doorbell Buffer Config: Not Supported
00:28:36.675 Get LBA Status Capability: Not Supported
00:28:36.675 Command & Feature Lockdown Capability: Not Supported
00:28:36.675 Abort Command Limit: 1
00:28:36.675 Async Event Request Limit: 4
00:28:36.675 Number of Firmware Slots: N/A
00:28:36.675 Firmware Slot 1 Read-Only: N/A
00:28:36.675 Firmware Activation Without Reset: N/A
00:28:36.675 Multiple Update Detection Support: N/A
00:28:36.675 Firmware Update Granularity: No Information Provided
00:28:36.675 Per-Namespace SMART Log: No
00:28:36.675 Asymmetric Namespace Access Log Page: Not Supported
00:28:36.675 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:36.675 Command Effects Log Page: Not Supported
00:28:36.675 Get Log Page Extended Data: Supported
00:28:36.675 Telemetry Log Pages: Not Supported
00:28:36.675 Persistent Event Log Pages: Not Supported
00:28:36.675 Supported Log Pages Log Page: May Support
00:28:36.675 Commands Supported & Effects Log Page: Not Supported
00:28:36.675 Feature Identifiers & Effects Log Page:May Support
00:28:36.675 NVMe-MI Commands & Effects Log Page: May Support
00:28:36.675 Data Area 4 for Telemetry Log: Not Supported
00:28:36.675 Error Log Page Entries Supported: 128
00:28:36.675 Keep Alive: Not Supported
00:28:36.675
00:28:36.675 NVM Command Set Attributes
00:28:36.675 ==========================
00:28:36.675 Submission Queue Entry Size
00:28:36.675 Max: 1
00:28:36.675 Min: 1
00:28:36.675 Completion Queue Entry Size
00:28:36.675 Max: 1
00:28:36.675 Min: 1
00:28:36.675 Number of Namespaces: 0
00:28:36.675 Compare Command: Not Supported
00:28:36.675 Write Uncorrectable Command: Not Supported
00:28:36.675 Dataset Management Command: Not Supported
00:28:36.675 Write Zeroes Command: Not Supported
00:28:36.675 Set Features Save Field: Not Supported
00:28:36.675 Reservations: Not Supported
00:28:36.675 Timestamp: Not Supported
00:28:36.675 Copy: Not Supported
00:28:36.675 Volatile Write Cache: Not Present
00:28:36.675 Atomic Write Unit (Normal): 1
00:28:36.675 Atomic Write Unit (PFail): 1
00:28:36.675 Atomic Compare & Write Unit: 1
00:28:36.675 Fused Compare & Write: Supported
00:28:36.675 Scatter-Gather List
00:28:36.675 SGL Command Set: Supported
00:28:36.675 SGL Keyed: Supported
00:28:36.675 SGL Bit Bucket Descriptor: Not Supported
00:28:36.675 SGL Metadata Pointer: Not Supported
00:28:36.675 Oversized SGL: Not Supported
00:28:36.675 SGL Metadata Address: Not Supported
00:28:36.675 SGL Offset: Supported
00:28:36.675 Transport SGL Data Block: Not Supported
00:28:36.675 Replay Protected Memory Block: Not Supported
00:28:36.675
00:28:36.675 Firmware Slot Information
00:28:36.675 =========================
00:28:36.675 Active slot: 0
00:28:36.675
00:28:36.675
00:28:36.675 Error Log
00:28:36.675 =========
00:28:36.675
00:28:36.675 Active Namespaces
00:28:36.675 =================
00:28:36.675 Discovery Log Page
00:28:36.675 ==================
00:28:36.675 Generation Counter: 2
00:28:36.675 Number of Records: 2
00:28:36.675 Record Format: 0
00:28:36.675
00:28:36.675 Discovery Log Entry 0
00:28:36.675 ----------------------
00:28:36.675 Transport Type: 3 (TCP)
00:28:36.675 Address Family: 1 (IPv4)
00:28:36.675 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:36.675 Entry Flags:
00:28:36.675 Duplicate Returned Information: 1
00:28:36.675 Explicit Persistent Connection Support for Discovery: 1
00:28:36.675 Transport Requirements:
00:28:36.675 Secure Channel: Not Required
00:28:36.675 Port ID: 0 (0x0000)
00:28:36.675 Controller ID: 65535 (0xffff)
00:28:36.675 Admin Max SQ Size: 128
00:28:36.675 Transport Service Identifier: 4420
00:28:36.675 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:36.675 Transport Address: 10.0.0.2
00:28:36.675 Discovery Log Entry 1
00:28:36.675 ----------------------
00:28:36.675 Transport Type: 3 (TCP)
00:28:36.675 Address Family: 1 (IPv4)
00:28:36.675 Subsystem Type: 2 (NVM Subsystem)
00:28:36.675 Entry Flags:
00:28:36.675 Duplicate Returned Information: 0
00:28:36.675 Explicit Persistent Connection Support for Discovery: 0
00:28:36.675 Transport Requirements:
00:28:36.675 Secure Channel: Not Required
00:28:36.675 Port ID: 0 (0x0000)
00:28:36.675 Controller ID: 65535 (0xffff)
00:28:36.675 Admin Max SQ Size: 128
00:28:36.675 Transport Service Identifier: 4420
00:28:36.675 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:36.675 Transport Address: 10.0.0.2
[2024-07-15 03:31:42.782153] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:28:36.675 [2024-07-15 03:31:42.782176] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4b880) on tqpair=0x1be4fe0
00:28:36.675 [2024-07-15 03:31:42.782189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:36.675 [2024-07-15 03:31:42.782198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4ba00) on tqpair=0x1be4fe0
00:28:36.675 [2024-07-15 03:31:42.782206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:36.675 [2024-07-15 03:31:42.782215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4bb80) on tqpair=0x1be4fe0
00:28:36.675 [2024-07-15 03:31:42.782222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:36.675 [2024-07-15 03:31:42.782230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4bd00) on tqpair=0x1be4fe0
00:28:36.675 [2024-07-15 03:31:42.782238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:36.675 [2024-07-15 03:31:42.782256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.675 [2024-07-15 03:31:42.782281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.675 [2024-07-15 03:31:42.782287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1be4fe0)
00:28:36.675 [2024-07-15 03:31:42.782298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.675 [2024-07-15 03:31:42.782323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4bd00, cid 3, qid 0
00:28:36.675 [2024-07-15 03:31:42.782463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.675 [2024-07-15 03:31:42.782476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.675 [2024-07-15 03:31:42.782483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.675 [2024-07-15 03:31:42.782489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4bd00) on tqpair=0x1be4fe0
00:28:36.675 [2024-07-15 03:31:42.782502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.675 [2024-07-15 03:31:42.782510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.675 [2024-07-15 03:31:42.782516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1be4fe0)
00:28:36.675 [2024-07-15
03:31:42.782530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.675 [2024-07-15 03:31:42.782558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4bd00, cid 3, qid 0 00:28:36.675 [2024-07-15 03:31:42.782671] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.675 [2024-07-15 03:31:42.782683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.675 [2024-07-15 03:31:42.782690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.675 [2024-07-15 03:31:42.782697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4bd00) on tqpair=0x1be4fe0 00:28:36.675 [2024-07-15 03:31:42.782707] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:36.675 [2024-07-15 03:31:42.782715] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:36.675 [2024-07-15 03:31:42.782730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.675 [2024-07-15 03:31:42.782739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.675 [2024-07-15 03:31:42.782746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1be4fe0) 00:28:36.675 [2024-07-15 03:31:42.782756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.675 [2024-07-15 03:31:42.782776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4bd00, cid 3, qid 0 00:28:36.675 [2024-07-15 03:31:42.782884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.676 [2024-07-15 03:31:42.782898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.676 [2024-07-15 03:31:42.782905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.782911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4bd00) on tqpair=0x1be4fe0 00:28:36.676 [2024-07-15 03:31:42.782929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.782939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.782945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1be4fe0) 00:28:36.676 [2024-07-15 03:31:42.782956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.676 [2024-07-15 03:31:42.782976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4bd00, cid 3, qid 0 00:28:36.676 [2024-07-15 03:31:42.783075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.676 [2024-07-15 03:31:42.783087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.676 [2024-07-15 03:31:42.783093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4bd00) on tqpair=0x1be4fe0 00:28:36.676 [2024-07-15 03:31:42.783116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783132] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1be4fe0) 00:28:36.676 [2024-07-15 03:31:42.783142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.676 [2024-07-15 03:31:42.783162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4bd00, cid 3, qid 0 00:28:36.676 [2024-07-15 03:31:42.783260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.676 [2024-07-15 03:31:42.783272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.676 [2024-07-15 03:31:42.783279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4bd00) on tqpair=0x1be4fe0 00:28:36.676 [2024-07-15 03:31:42.783301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1be4fe0) 00:28:36.676 [2024-07-15 03:31:42.783331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.676 [2024-07-15 03:31:42.783352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4bd00, cid 3, qid 0 00:28:36.676 [2024-07-15 03:31:42.783453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.676 [2024-07-15 03:31:42.783465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.676 [2024-07-15 03:31:42.783472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4bd00) on tqpair=0x1be4fe0 00:28:36.676 [2024-07-15 03:31:42.783494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1be4fe0) 00:28:36.676 [2024-07-15 03:31:42.783520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.676 [2024-07-15 03:31:42.783540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4bd00, cid 3, qid 0 00:28:36.676 [2024-07-15 03:31:42.783638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.676 [2024-07-15 03:31:42.783651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.676 [2024-07-15 03:31:42.783657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4bd00) on tqpair=0x1be4fe0 00:28:36.676 [2024-07-15 03:31:42.783680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783689] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1be4fe0) 00:28:36.676 [2024-07-15 03:31:42.783706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.676 [2024-07-15 03:31:42.783726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4bd00, cid 3, qid 0 00:28:36.676 [2024-07-15 03:31:42.783828] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.676 [2024-07-15 03:31:42.783842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.676 [2024-07-15 03:31:42.783849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.783856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4bd00) on tqpair=0x1be4fe0 00:28:36.676 [2024-07-15 03:31:42.783872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.787894] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.787902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1be4fe0) 00:28:36.676 [2024-07-15 03:31:42.787913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.676 [2024-07-15 03:31:42.787936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c4bd00, cid 3, qid 0 00:28:36.676 [2024-07-15 03:31:42.788083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.676 [2024-07-15 03:31:42.788095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.676 [2024-07-15 03:31:42.788102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.676 [2024-07-15 03:31:42.788109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c4bd00) on tqpair=0x1be4fe0 00:28:36.676 [2024-07-15 03:31:42.788123] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:36.676 00:28:36.676 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:36.937 [2024-07-15 03:31:42.824048] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
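This second run at identify.sh@45 differs from the discovery run at @39 only in the subnqn of the -r transport ID, so the report it prints describes cnode1 and its Malloc0 namespace rather than the discovery controller. The invocation, with $SPDK_DIR standing in for the jenkins workspace path:

$SPDK_DIR/build/bin/spdk_nvme_identify \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  -L all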
00:28:36.938 [2024-07-15 03:31:42.824094] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287891 ] 00:28:36.938 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.938 [2024-07-15 03:31:42.858787] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:36.938 [2024-07-15 03:31:42.858838] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:36.938 [2024-07-15 03:31:42.858848] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:36.938 [2024-07-15 03:31:42.858885] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:36.938 [2024-07-15 03:31:42.858897] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:36.938 [2024-07-15 03:31:42.862932] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:36.938 [2024-07-15 03:31:42.862971] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10d0fe0 0 00:28:36.938 [2024-07-15 03:31:42.873897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:36.938 [2024-07-15 03:31:42.873927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:36.938 [2024-07-15 03:31:42.873935] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:36.938 [2024-07-15 03:31:42.873942] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:36.938 [2024-07-15 03:31:42.873981] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.873993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.874000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0fe0) 00:28:36.938 [2024-07-15 03:31:42.874015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:36.938 [2024-07-15 03:31:42.874041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137880, cid 0, qid 0 00:28:36.938 [2024-07-15 03:31:42.881902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.938 [2024-07-15 03:31:42.881919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.938 [2024-07-15 03:31:42.881927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.881934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137880) on tqpair=0x10d0fe0 00:28:36.938 [2024-07-15 03:31:42.881952] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:36.938 [2024-07-15 03:31:42.881979] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:36.938 [2024-07-15 03:31:42.881988] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:36.938 [2024-07-15 03:31:42.882006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.882016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
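The debug lines here repeat the controller-init sequence already seen for the discovery controller (connect adminq, icreq, FABRIC CONNECT, then the read vs / read cap property gets), this time on tqpair 0x10d0fe0 for cnode1. For comparison outside SPDK's userspace initiator, the same listener should also be reachable with the kernel initiator (nvme-tcp was modprobe'd earlier in this run); a hedged nvme-cli sketch, with the device name illustrative only:

nvme discover -t tcp -a 10.0.0.2 -s 4420            # same discovery log page as above
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme0                              # check 'nvme list' for the real name
nvme disconnect -n nqn.2016-06.io.spdk:cnode1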
00:28:36.938 [2024-07-15 03:31:42.882023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0fe0) 00:28:36.938 [2024-07-15 03:31:42.882034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.938 [2024-07-15 03:31:42.882058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137880, cid 0, qid 0 00:28:36.938 [2024-07-15 03:31:42.882212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.938 [2024-07-15 03:31:42.882228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.938 [2024-07-15 03:31:42.882235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.882243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137880) on tqpair=0x10d0fe0 00:28:36.938 [2024-07-15 03:31:42.882251] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:36.938 [2024-07-15 03:31:42.882264] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:36.938 [2024-07-15 03:31:42.882277] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.882285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.882291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0fe0) 00:28:36.938 [2024-07-15 03:31:42.882302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.938 [2024-07-15 03:31:42.882324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137880, cid 0, qid 0 00:28:36.938 [2024-07-15 03:31:42.882426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.938 [2024-07-15 03:31:42.882441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.938 [2024-07-15 03:31:42.882448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.882455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137880) on tqpair=0x10d0fe0 00:28:36.938 [2024-07-15 03:31:42.882463] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:36.938 [2024-07-15 03:31:42.882477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:36.938 [2024-07-15 03:31:42.882490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.882497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.882504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0fe0) 00:28:36.938 [2024-07-15 03:31:42.882515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.938 [2024-07-15 03:31:42.882536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137880, cid 0, qid 0 00:28:36.938 [2024-07-15 03:31:42.882643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.938 [2024-07-15 03:31:42.882655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:28:36.938 [2024-07-15 03:31:42.882662] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.882669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137880) on tqpair=0x10d0fe0 00:28:36.938 [2024-07-15 03:31:42.882677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:36.938 [2024-07-15 03:31:42.882693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.882702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.882709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0fe0) 00:28:36.938 [2024-07-15 03:31:42.882719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.938 [2024-07-15 03:31:42.882740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137880, cid 0, qid 0 00:28:36.938 [2024-07-15 03:31:42.882837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.938 [2024-07-15 03:31:42.882849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.938 [2024-07-15 03:31:42.882856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.882867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137880) on tqpair=0x10d0fe0 00:28:36.938 [2024-07-15 03:31:42.882883] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:36.938 [2024-07-15 03:31:42.882893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:36.938 [2024-07-15 03:31:42.882907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:36.938 [2024-07-15 03:31:42.883017] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:36.938 [2024-07-15 03:31:42.883024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:36.938 [2024-07-15 03:31:42.883036] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.883044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.883050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0fe0) 00:28:36.938 [2024-07-15 03:31:42.883061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.938 [2024-07-15 03:31:42.883082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137880, cid 0, qid 0 00:28:36.938 [2024-07-15 03:31:42.883217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.938 [2024-07-15 03:31:42.883232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.938 [2024-07-15 03:31:42.883239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.883245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137880) on 
tqpair=0x10d0fe0 00:28:36.938 [2024-07-15 03:31:42.883254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:36.938 [2024-07-15 03:31:42.883270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.883279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.883286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0fe0) 00:28:36.938 [2024-07-15 03:31:42.883297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.938 [2024-07-15 03:31:42.883318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137880, cid 0, qid 0 00:28:36.938 [2024-07-15 03:31:42.883418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.938 [2024-07-15 03:31:42.883433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.938 [2024-07-15 03:31:42.883440] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.938 [2024-07-15 03:31:42.883447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137880) on tqpair=0x10d0fe0 00:28:36.938 [2024-07-15 03:31:42.883455] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:36.938 [2024-07-15 03:31:42.883463] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:36.938 [2024-07-15 03:31:42.883477] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:36.938 [2024-07-15 03:31:42.883491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:36.938 [2024-07-15 03:31:42.883505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.883513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0fe0) 00:28:36.939 [2024-07-15 03:31:42.883524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.939 [2024-07-15 03:31:42.883549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137880, cid 0, qid 0 00:28:36.939 [2024-07-15 03:31:42.883687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.939 [2024-07-15 03:31:42.883703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.939 [2024-07-15 03:31:42.883710] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.883716] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0fe0): datao=0, datal=4096, cccid=0 00:28:36.939 [2024-07-15 03:31:42.883724] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1137880) on tqpair(0x10d0fe0): expected_datao=0, payload_size=4096 00:28:36.939 [2024-07-15 03:31:42.883732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.883742] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.883750] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.883762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.939 [2024-07-15 03:31:42.883772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.939 [2024-07-15 03:31:42.883779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.883785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137880) on tqpair=0x10d0fe0 00:28:36.939 [2024-07-15 03:31:42.883796] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:36.939 [2024-07-15 03:31:42.883809] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:36.939 [2024-07-15 03:31:42.883818] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:36.939 [2024-07-15 03:31:42.883825] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:36.939 [2024-07-15 03:31:42.883833] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:36.939 [2024-07-15 03:31:42.883841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:36.939 [2024-07-15 03:31:42.883855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:36.939 [2024-07-15 03:31:42.883867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.883875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.883890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0fe0) 00:28:36.939 [2024-07-15 03:31:42.883901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:36.939 [2024-07-15 03:31:42.883923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137880, cid 0, qid 0 00:28:36.939 [2024-07-15 03:31:42.884031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.939 [2024-07-15 03:31:42.884046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.939 [2024-07-15 03:31:42.884053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137880) on tqpair=0x10d0fe0 00:28:36.939 [2024-07-15 03:31:42.884070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10d0fe0) 00:28:36.939 [2024-07-15 03:31:42.884094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.939 [2024-07-15 03:31:42.884105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884123] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10d0fe0) 00:28:36.939 [2024-07-15 03:31:42.884132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.939 [2024-07-15 03:31:42.884142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884149] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10d0fe0) 00:28:36.939 [2024-07-15 03:31:42.884164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.939 [2024-07-15 03:31:42.884174] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884181] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0fe0) 00:28:36.939 [2024-07-15 03:31:42.884196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.939 [2024-07-15 03:31:42.884221] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:36.939 [2024-07-15 03:31:42.884240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:36.939 [2024-07-15 03:31:42.884253] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0fe0) 00:28:36.939 [2024-07-15 03:31:42.884270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.939 [2024-07-15 03:31:42.884292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137880, cid 0, qid 0 00:28:36.939 [2024-07-15 03:31:42.884320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137a00, cid 1, qid 0 00:28:36.939 [2024-07-15 03:31:42.884328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137b80, cid 2, qid 0 00:28:36.939 [2024-07-15 03:31:42.884336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137d00, cid 3, qid 0 00:28:36.939 [2024-07-15 03:31:42.884344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137e80, cid 4, qid 0 00:28:36.939 [2024-07-15 03:31:42.884491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.939 [2024-07-15 03:31:42.884504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.939 [2024-07-15 03:31:42.884510] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137e80) on tqpair=0x10d0fe0 00:28:36.939 [2024-07-15 03:31:42.884526] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:36.939 [2024-07-15 03:31:42.884535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:28:36.939 [2024-07-15 03:31:42.884549] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:36.939 [2024-07-15 03:31:42.884562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:36.939 [2024-07-15 03:31:42.884573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0fe0) 00:28:36.939 [2024-07-15 03:31:42.884601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:36.939 [2024-07-15 03:31:42.884639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137e80, cid 4, qid 0 00:28:36.939 [2024-07-15 03:31:42.884809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.939 [2024-07-15 03:31:42.884822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.939 [2024-07-15 03:31:42.884828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137e80) on tqpair=0x10d0fe0 00:28:36.939 [2024-07-15 03:31:42.884907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:36.939 [2024-07-15 03:31:42.884929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:36.939 [2024-07-15 03:31:42.884944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.884953] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0fe0) 00:28:36.939 [2024-07-15 03:31:42.884963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.939 [2024-07-15 03:31:42.884985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137e80, cid 4, qid 0 00:28:36.939 [2024-07-15 03:31:42.885137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.939 [2024-07-15 03:31:42.885149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.939 [2024-07-15 03:31:42.885156] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.885162] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0fe0): datao=0, datal=4096, cccid=4 00:28:36.939 [2024-07-15 03:31:42.885170] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1137e80) on tqpair(0x10d0fe0): expected_datao=0, payload_size=4096 00:28:36.939 [2024-07-15 03:31:42.885182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.885208] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.885220] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.885247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:28:36.939 [2024-07-15 03:31:42.885259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.939 [2024-07-15 03:31:42.885266] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.939 [2024-07-15 03:31:42.885273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137e80) on tqpair=0x10d0fe0 00:28:36.939 [2024-07-15 03:31:42.885297] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:36.939 [2024-07-15 03:31:42.885315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:36.940 [2024-07-15 03:31:42.885335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:36.940 [2024-07-15 03:31:42.885348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.885356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0fe0) 00:28:36.940 [2024-07-15 03:31:42.885367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.940 [2024-07-15 03:31:42.885389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137e80, cid 4, qid 0 00:28:36.940 [2024-07-15 03:31:42.885519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.940 [2024-07-15 03:31:42.885535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.940 [2024-07-15 03:31:42.885541] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.885554] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0fe0): datao=0, datal=4096, cccid=4 00:28:36.940 [2024-07-15 03:31:42.885563] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1137e80) on tqpair(0x10d0fe0): expected_datao=0, payload_size=4096 00:28:36.940 [2024-07-15 03:31:42.885570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.885588] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.885597] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.885638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.940 [2024-07-15 03:31:42.885649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.940 [2024-07-15 03:31:42.885655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.885662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137e80) on tqpair=0x10d0fe0 00:28:36.940 [2024-07-15 03:31:42.885686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:36.940 [2024-07-15 03:31:42.885706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:36.940 [2024-07-15 03:31:42.885720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.885728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0fe0) 00:28:36.940 [2024-07-15 03:31:42.885739] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.940 [2024-07-15 03:31:42.885760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137e80, cid 4, qid 0 00:28:36.940 [2024-07-15 03:31:42.885873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:36.940 [2024-07-15 03:31:42.889896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:36.940 [2024-07-15 03:31:42.889905] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.889911] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0fe0): datao=0, datal=4096, cccid=4 00:28:36.940 [2024-07-15 03:31:42.889919] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1137e80) on tqpair(0x10d0fe0): expected_datao=0, payload_size=4096 00:28:36.940 [2024-07-15 03:31:42.889927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.889944] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.889954] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.889965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.940 [2024-07-15 03:31:42.889974] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.940 [2024-07-15 03:31:42.889981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.889987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137e80) on tqpair=0x10d0fe0 00:28:36.940 [2024-07-15 03:31:42.890002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:36.940 [2024-07-15 03:31:42.890017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:36.940 [2024-07-15 03:31:42.890049] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:36.940 [2024-07-15 03:31:42.890062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:36.940 [2024-07-15 03:31:42.890071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:36.940 [2024-07-15 03:31:42.890080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:36.940 [2024-07-15 03:31:42.890093] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:36.940 [2024-07-15 03:31:42.890102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:36.940 [2024-07-15 03:31:42.890111] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:36.940 [2024-07-15 03:31:42.890131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.890140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x10d0fe0) 00:28:36.940 [2024-07-15 03:31:42.890151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.940 [2024-07-15 03:31:42.890178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.890186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.890192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10d0fe0) 00:28:36.940 [2024-07-15 03:31:42.890201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.940 [2024-07-15 03:31:42.890227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137e80, cid 4, qid 0 00:28:36.940 [2024-07-15 03:31:42.890255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1138000, cid 5, qid 0 00:28:36.940 [2024-07-15 03:31:42.890397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.940 [2024-07-15 03:31:42.890410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.940 [2024-07-15 03:31:42.890417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.890424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137e80) on tqpair=0x10d0fe0 00:28:36.940 [2024-07-15 03:31:42.890434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.940 [2024-07-15 03:31:42.890443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.940 [2024-07-15 03:31:42.890450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.890456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1138000) on tqpair=0x10d0fe0 00:28:36.940 [2024-07-15 03:31:42.890472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.890481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10d0fe0) 00:28:36.940 [2024-07-15 03:31:42.890492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.940 [2024-07-15 03:31:42.890512] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1138000, cid 5, qid 0 00:28:36.940 [2024-07-15 03:31:42.890622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.940 [2024-07-15 03:31:42.890635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.940 [2024-07-15 03:31:42.890641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.890649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1138000) on tqpair=0x10d0fe0 00:28:36.940 [2024-07-15 03:31:42.890664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.890673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10d0fe0) 00:28:36.940 [2024-07-15 03:31:42.890683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.940 [2024-07-15 03:31:42.890704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1138000, cid 5, qid 0 00:28:36.940 [2024-07-15 03:31:42.890803] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.940 [2024-07-15 03:31:42.890815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.940 [2024-07-15 03:31:42.890825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.890833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1138000) on tqpair=0x10d0fe0 00:28:36.940 [2024-07-15 03:31:42.890849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.890858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10d0fe0) 00:28:36.940 [2024-07-15 03:31:42.890869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.940 [2024-07-15 03:31:42.890899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1138000, cid 5, qid 0 00:28:36.940 [2024-07-15 03:31:42.890999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:36.940 [2024-07-15 03:31:42.891012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:36.940 [2024-07-15 03:31:42.891019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.891026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1138000) on tqpair=0x10d0fe0 00:28:36.940 [2024-07-15 03:31:42.891049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.891060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10d0fe0) 00:28:36.940 [2024-07-15 03:31:42.891071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.940 [2024-07-15 03:31:42.891084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.891092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10d0fe0) 00:28:36.940 [2024-07-15 03:31:42.891101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.940 [2024-07-15 03:31:42.891113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.940 [2024-07-15 03:31:42.891121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x10d0fe0) 00:28:36.940 [2024-07-15 03:31:42.891130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.940 [2024-07-15 03:31:42.891142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:36.941 [2024-07-15 03:31:42.891150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10d0fe0) 00:28:36.941 [2024-07-15 03:31:42.891159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.941 [2024-07-15 03:31:42.891196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1138000, cid 5, qid 0 00:28:36.941 [2024-07-15 03:31:42.891208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137e80, cid 4, qid 0 
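The GET LOG PAGE (02) commands fanned out just above request log pages 01h (error, cid 5), 02h (SMART/health, cid 4), 03h (firmware slot, cid 6) and 05h (commands supported and effects, cid 7) while the init state machine probes supported log pages. For illustration only, fetching the health page by hand and polling the admin queue for the completion PDUs seen in this trace might look like the following sketch; fetch_health_log() and g_done are hypothetical helpers, while the spdk_nvme_* calls are the library's public API.

#include <stdio.h>
#include <stdbool.h>

#include "spdk/nvme.h"

static volatile bool g_done;

/* Runs from spdk_nvme_ctrlr_process_admin_completions() once the capsule
 * response PDU (the "pdu type = 5" lines in the trace) is reaped. */
static void
health_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	const struct spdk_nvme_health_information_page *hp = cb_arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("Available Spare: %u%%\n", hp->available_spare);
	}
	g_done = true;
}

/* Mirrors the traced "GET LOG PAGE (02) ... nsid:ffffffff cdw10:007f0002"
 * command: 512 bytes of the health page at offset 0 for the global NSID. */
static int
fetch_health_log(struct spdk_nvme_ctrlr *ctrlr,
		 struct spdk_nvme_health_information_page *hp)
{
	int rc;

	g_done = false;
	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
		SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
		hp, sizeof(*hp), 0, health_log_done, hp);
	if (rc != 0) {
		return rc;
	}
	/* Poll the admin qpair until the completion callback fires. */
	while (!g_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}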
00:28:36.941 [2024-07-15 03:31:42.891216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1138180, cid 6, qid 0
00:28:36.941 [2024-07-15 03:31:42.891223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1138300, cid 7, qid 0
00:28:36.941 [2024-07-15 03:31:42.891530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:36.941 [2024-07-15 03:31:42.891546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:36.941 [2024-07-15 03:31:42.891553] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891560] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0fe0): datao=0, datal=8192, cccid=5
00:28:36.941 [2024-07-15 03:31:42.891568] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1138000) on tqpair(0x10d0fe0): expected_datao=0, payload_size=8192
00:28:36.941 [2024-07-15 03:31:42.891576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891586] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891594] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:36.941 [2024-07-15 03:31:42.891615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:36.941 [2024-07-15 03:31:42.891622] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891628] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0fe0): datao=0, datal=512, cccid=4
00:28:36.941 [2024-07-15 03:31:42.891636] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1137e80) on tqpair(0x10d0fe0): expected_datao=0, payload_size=512
00:28:36.941 [2024-07-15 03:31:42.891644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891653] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891660] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:36.941 [2024-07-15 03:31:42.891678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:36.941 [2024-07-15 03:31:42.891684] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891690] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0fe0): datao=0, datal=512, cccid=6
00:28:36.941 [2024-07-15 03:31:42.891698] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1138180) on tqpair(0x10d0fe0): expected_datao=0, payload_size=512
00:28:36.941 [2024-07-15 03:31:42.891706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891715] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891722] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:36.941 [2024-07-15 03:31:42.891739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:36.941 [2024-07-15 03:31:42.891745] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891752] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10d0fe0): datao=0, datal=4096, cccid=7
00:28:36.941 [2024-07-15 03:31:42.891759] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1138300) on tqpair(0x10d0fe0): expected_datao=0, payload_size=4096
00:28:36.941 [2024-07-15 03:31:42.891767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891777] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891784] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.941 [2024-07-15 03:31:42.891820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.941 [2024-07-15 03:31:42.891827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1138000) on tqpair=0x10d0fe0
00:28:36.941 [2024-07-15 03:31:42.891851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.941 [2024-07-15 03:31:42.891862] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.941 [2024-07-15 03:31:42.891890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137e80) on tqpair=0x10d0fe0
00:28:36.941 [2024-07-15 03:31:42.891915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.941 [2024-07-15 03:31:42.891925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.941 [2024-07-15 03:31:42.891932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1138180) on tqpair=0x10d0fe0
00:28:36.941 [2024-07-15 03:31:42.891949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.941 [2024-07-15 03:31:42.891959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.941 [2024-07-15 03:31:42.891965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.941 [2024-07-15 03:31:42.891975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1138300) on tqpair=0x10d0fe0
00:28:36.941 =====================================================
00:28:36.941 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:36.941 =====================================================
00:28:36.941 Controller Capabilities/Features
00:28:36.941 ================================
00:28:36.941 Vendor ID: 8086
00:28:36.941 Subsystem Vendor ID: 8086
00:28:36.941 Serial Number: SPDK00000000000001
00:28:36.941 Model Number: SPDK bdev Controller
00:28:36.941 Firmware Version: 24.09
00:28:36.941 Recommended Arb Burst: 6
00:28:36.941 IEEE OUI Identifier: e4 d2 5c
00:28:36.941 Multi-path I/O
00:28:36.941 May have multiple subsystem ports: Yes
00:28:36.941 May have multiple controllers: Yes
00:28:36.941 Associated with SR-IOV VF: No
00:28:36.941 Max Data Transfer Size: 131072
00:28:36.941 Max Number of Namespaces: 32
00:28:36.941 Max Number of I/O Queues: 127
00:28:36.941 NVMe Specification Version (VS): 1.3
00:28:36.941 NVMe Specification Version (Identify): 1.3
00:28:36.941 Maximum Queue Entries: 128
00:28:36.941 Contiguous Queues Required: Yes
00:28:36.941 Arbitration Mechanisms Supported
00:28:36.941 Weighted Round Robin: Not Supported
00:28:36.941 Vendor Specific: Not Supported
00:28:36.941 Reset Timeout: 15000 ms
00:28:36.941 Doorbell Stride: 4 bytes
00:28:36.941 NVM Subsystem Reset: Not Supported
00:28:36.941 Command Sets Supported
00:28:36.941 NVM Command Set: Supported
00:28:36.941 Boot Partition: Not Supported
00:28:36.941 Memory Page Size Minimum: 4096 bytes
00:28:36.941 Memory Page Size Maximum: 4096 bytes
00:28:36.941 Persistent Memory Region: Not Supported
00:28:36.941 Optional Asynchronous Events Supported
00:28:36.941 Namespace Attribute Notices: Supported
00:28:36.941 Firmware Activation Notices: Not Supported
00:28:36.941 ANA Change Notices: Not Supported
00:28:36.941 PLE Aggregate Log Change Notices: Not Supported
00:28:36.941 LBA Status Info Alert Notices: Not Supported
00:28:36.941 EGE Aggregate Log Change Notices: Not Supported
00:28:36.941 Normal NVM Subsystem Shutdown event: Not Supported
00:28:36.941 Zone Descriptor Change Notices: Not Supported
00:28:36.941 Discovery Log Change Notices: Not Supported
00:28:36.941 Controller Attributes
00:28:36.941 128-bit Host Identifier: Supported
00:28:36.941 Non-Operational Permissive Mode: Not Supported
00:28:36.941 NVM Sets: Not Supported
00:28:36.941 Read Recovery Levels: Not Supported
00:28:36.941 Endurance Groups: Not Supported
00:28:36.941 Predictable Latency Mode: Not Supported
00:28:36.941 Traffic Based Keep ALive: Not Supported
00:28:36.941 Namespace Granularity: Not Supported
00:28:36.941 SQ Associations: Not Supported
00:28:36.941 UUID List: Not Supported
00:28:36.941 Multi-Domain Subsystem: Not Supported
00:28:36.941 Fixed Capacity Management: Not Supported
00:28:36.941 Variable Capacity Management: Not Supported
00:28:36.941 Delete Endurance Group: Not Supported
00:28:36.941 Delete NVM Set: Not Supported
00:28:36.941 Extended LBA Formats Supported: Not Supported
00:28:36.941 Flexible Data Placement Supported: Not Supported
00:28:36.941
00:28:36.941 Controller Memory Buffer Support
00:28:36.941 ================================
00:28:36.941 Supported: No
00:28:36.941
00:28:36.941 Persistent Memory Region Support
00:28:36.941 ================================
00:28:36.941 Supported: No
00:28:36.941
00:28:36.941 Admin Command Set Attributes
00:28:36.941 ============================
00:28:36.941 Security Send/Receive: Not Supported
00:28:36.941 Format NVM: Not Supported
00:28:36.941 Firmware Activate/Download: Not Supported
00:28:36.941 Namespace Management: Not Supported
00:28:36.941 Device Self-Test: Not Supported
00:28:36.941 Directives: Not Supported
00:28:36.941 NVMe-MI: Not Supported
00:28:36.941 Virtualization Management: Not Supported
00:28:36.941 Doorbell Buffer Config: Not Supported
00:28:36.941 Get LBA Status Capability: Not Supported
00:28:36.941 Command & Feature Lockdown Capability: Not Supported
00:28:36.941 Abort Command Limit: 4
00:28:36.941 Async Event Request Limit: 4
00:28:36.942 Number of Firmware Slots: N/A
00:28:36.942 Firmware Slot 1 Read-Only: N/A
00:28:36.942 Firmware Activation Without Reset: N/A
00:28:36.942 Multiple Update Detection Support: N/A
00:28:36.942 Firmware Update Granularity: No Information Provided
00:28:36.942 Per-Namespace SMART Log: No
00:28:36.942 Asymmetric Namespace Access Log Page: Not Supported
00:28:36.942 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:28:36.942 Command Effects Log Page: Supported
00:28:36.942 Get Log Page Extended Data: Supported
00:28:36.942 Telemetry Log Pages: Not Supported
00:28:36.942 Persistent Event Log Pages: Not Supported
00:28:36.942 Supported Log Pages Log Page: May Support
00:28:36.942 Commands Supported & Effects Log Page: Not Supported
00:28:36.942 Feature Identifiers & Effects Log Page:May Support
00:28:36.942 NVMe-MI Commands & Effects Log Page: May Support
00:28:36.942 Data Area 4 for Telemetry Log: Not Supported
00:28:36.942 Error Log Page Entries Supported: 128
00:28:36.942 Keep Alive: Supported
00:28:36.942 Keep Alive Granularity: 10000 ms
00:28:36.942
00:28:36.942 NVM Command Set Attributes
00:28:36.942 ==========================
00:28:36.942 Submission Queue Entry Size
00:28:36.942 Max: 64
00:28:36.942 Min: 64
00:28:36.942 Completion Queue Entry Size
00:28:36.942 Max: 16
00:28:36.942 Min: 16
00:28:36.942 Number of Namespaces: 32
00:28:36.942 Compare Command: Supported
00:28:36.942 Write Uncorrectable Command: Not Supported
00:28:36.942 Dataset Management Command: Supported
00:28:36.942 Write Zeroes Command: Supported
00:28:36.942 Set Features Save Field: Not Supported
00:28:36.942 Reservations: Supported
00:28:36.942 Timestamp: Not Supported
00:28:36.942 Copy: Supported
00:28:36.942 Volatile Write Cache: Present
00:28:36.942 Atomic Write Unit (Normal): 1
00:28:36.942 Atomic Write Unit (PFail): 1
00:28:36.942 Atomic Compare & Write Unit: 1
00:28:36.942 Fused Compare & Write: Supported
00:28:36.942 Scatter-Gather List
00:28:36.942 SGL Command Set: Supported
00:28:36.942 SGL Keyed: Supported
00:28:36.942 SGL Bit Bucket Descriptor: Not Supported
00:28:36.942 SGL Metadata Pointer: Not Supported
00:28:36.942 Oversized SGL: Not Supported
00:28:36.942 SGL Metadata Address: Not Supported
00:28:36.942 SGL Offset: Supported
00:28:36.942 Transport SGL Data Block: Not Supported
00:28:36.942 Replay Protected Memory Block: Not Supported
00:28:36.942
00:28:36.942 Firmware Slot Information
00:28:36.942 =========================
00:28:36.942 Active slot: 1
00:28:36.942 Slot 1 Firmware Revision: 24.09
00:28:36.942
00:28:36.942
00:28:36.942 Commands Supported and Effects
00:28:36.942 ==============================
00:28:36.942 Admin Commands
00:28:36.942 --------------
00:28:36.942 Get Log Page (02h): Supported
00:28:36.942 Identify (06h): Supported
00:28:36.942 Abort (08h): Supported
00:28:36.942 Set Features (09h): Supported
00:28:36.942 Get Features (0Ah): Supported
00:28:36.942 Asynchronous Event Request (0Ch): Supported
00:28:36.942 Keep Alive (18h): Supported
00:28:36.942 I/O Commands
00:28:36.942 ------------
00:28:36.942 Flush (00h): Supported LBA-Change
00:28:36.942 Write (01h): Supported LBA-Change
00:28:36.942 Read (02h): Supported
00:28:36.942 Compare (05h): Supported
00:28:36.942 Write Zeroes (08h): Supported LBA-Change
00:28:36.942 Dataset Management (09h): Supported LBA-Change
00:28:36.942 Copy (19h): Supported LBA-Change
00:28:36.942
00:28:36.942 Error Log
00:28:36.942 =========
00:28:36.942
00:28:36.942 Arbitration
00:28:36.942 ===========
00:28:36.942 Arbitration Burst: 1
00:28:36.942
00:28:36.942 Power Management
00:28:36.942 ================
00:28:36.942 Number of Power States: 1
00:28:36.942 Current Power State: Power State #0
00:28:36.942 Power State #0:
00:28:36.942 Max Power: 0.00 W
00:28:36.942 Non-Operational State: Operational
00:28:36.942 Entry Latency: Not Reported
00:28:36.942 Exit Latency: Not Reported
00:28:36.942 Relative Read Throughput: 0
00:28:36.942 Relative Read Latency: 0
00:28:36.942 Relative Write Throughput: 0
00:28:36.942 Relative Write Latency: 0
00:28:36.942 Idle Power: Not Reported
00:28:36.942 Active Power: Not Reported
00:28:36.942 Non-Operational Permissive Mode: Not Supported
00:28:36.942
00:28:36.942 Health Information
00:28:36.942 ==================
00:28:36.942 Critical Warnings:
00:28:36.942 Available Spare Space: OK
00:28:36.942 Temperature: OK
00:28:36.942 Device Reliability: OK
00:28:36.942 Read Only: No
00:28:36.942 Volatile Memory Backup: OK
00:28:36.942 Current Temperature: 0 Kelvin (-273 Celsius)
00:28:36.942 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:28:36.942 Available Spare: 0%
00:28:36.942 Available Spare Threshold: 0%
00:28:36.942 Life Percentage Used:[2024-07-15 03:31:42.892095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.942 [2024-07-15 03:31:42.892108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10d0fe0)
00:28:36.942 [2024-07-15 03:31:42.892119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.942 [2024-07-15 03:31:42.892142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1138300, cid 7, qid 0
00:28:36.942 [2024-07-15 03:31:42.892291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.942 [2024-07-15 03:31:42.892306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.942 [2024-07-15 03:31:42.892313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.942 [2024-07-15 03:31:42.892320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1138300) on tqpair=0x10d0fe0
00:28:36.942 [2024-07-15 03:31:42.892371] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:28:36.942 [2024-07-15 03:31:42.892391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137880) on tqpair=0x10d0fe0
00:28:36.942 [2024-07-15 03:31:42.892402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:36.942 [2024-07-15 03:31:42.892411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137a00) on tqpair=0x10d0fe0
00:28:36.942 [2024-07-15 03:31:42.892419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:36.942 [2024-07-15 03:31:42.892428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137b80) on tqpair=0x10d0fe0
00:28:36.942 [2024-07-15 03:31:42.892435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:36.942 [2024-07-15 03:31:42.892444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137d00) on tqpair=0x10d0fe0
00:28:36.942 [2024-07-15 03:31:42.892466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:36.942 [2024-07-15 03:31:42.892479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.942 [2024-07-15 03:31:42.892487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.942 [2024-07-15 03:31:42.892494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0fe0)
00:28:36.942 [2024-07-15 03:31:42.892504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.943 [2024-07-15 03:31:42.892526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137d00, cid 3, qid 0
00:28:36.943 [2024-07-15 03:31:42.892663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.943 [2024-07-15 03:31:42.892676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.943 [2024-07-15 03:31:42.892683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.892690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137d00) on tqpair=0x10d0fe0
00:28:36.943 [2024-07-15 03:31:42.892701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.892709] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.892715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0fe0)
00:28:36.943 [2024-07-15 03:31:42.892726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.943 [2024-07-15 03:31:42.892752] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137d00, cid 3, qid 0
00:28:36.943 [2024-07-15 03:31:42.892868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.943 [2024-07-15 03:31:42.892891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.943 [2024-07-15 03:31:42.892903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.892911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137d00) on tqpair=0x10d0fe0
00:28:36.943 [2024-07-15 03:31:42.892918] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:28:36.943 [2024-07-15 03:31:42.892926] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:28:36.943 [2024-07-15 03:31:42.892943] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.892952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.892959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0fe0)
00:28:36.943 [2024-07-15 03:31:42.892969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.943 [2024-07-15 03:31:42.892991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137d00, cid 3, qid 0
00:28:36.943 [2024-07-15 03:31:42.893089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.943 [2024-07-15 03:31:42.893102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.943 [2024-07-15 03:31:42.893108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137d00) on tqpair=0x10d0fe0
00:28:36.943 [2024-07-15 03:31:42.893132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0fe0)
00:28:36.943 [2024-07-15 03:31:42.893158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.943 [2024-07-15 03:31:42.893179] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137d00, cid 3, qid 0
00:28:36.943 [2024-07-15 03:31:42.893281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.943 [2024-07-15 03:31:42.893296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.943 [2024-07-15 03:31:42.893303] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137d00) on tqpair=0x10d0fe0
00:28:36.943 [2024-07-15 03:31:42.893326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0fe0)
00:28:36.943 [2024-07-15 03:31:42.893353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.943 [2024-07-15 03:31:42.893373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137d00, cid 3, qid 0
00:28:36.943 [2024-07-15 03:31:42.893470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.943 [2024-07-15 03:31:42.893482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.943 [2024-07-15 03:31:42.893489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893496] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137d00) on tqpair=0x10d0fe0
00:28:36.943 [2024-07-15 03:31:42.893511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0fe0)
00:28:36.943 [2024-07-15 03:31:42.893538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.943 [2024-07-15 03:31:42.893558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137d00, cid 3, qid 0
00:28:36.943 [2024-07-15 03:31:42.893657] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.943 [2024-07-15 03:31:42.893673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.943 [2024-07-15 03:31:42.893680] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137d00) on tqpair=0x10d0fe0
00:28:36.943 [2024-07-15 03:31:42.893703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.893719] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0fe0)
00:28:36.943 [2024-07-15 03:31:42.893729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.943 [2024-07-15 03:31:42.893750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137d00, cid 3, qid 0
00:28:36.943 [2024-07-15 03:31:42.893847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.943 [2024-07-15 03:31:42.893862] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.943 [2024-07-15 03:31:42.893869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.897887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137d00) on tqpair=0x10d0fe0
00:28:36.943 [2024-07-15 03:31:42.897913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.897939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.897945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10d0fe0)
00:28:36.943 [2024-07-15 03:31:42.897956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.943 [2024-07-15 03:31:42.897979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1137d00, cid 3, qid 0
00:28:36.943 [2024-07-15 03:31:42.898118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:36.943 [2024-07-15 03:31:42.898134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:36.943 [2024-07-15 03:31:42.898141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:36.943 [2024-07-15 03:31:42.898147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1137d00) on tqpair=0x10d0fe0
00:28:36.943 [2024-07-15 03:31:42.898160] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds
00:28:36.943 0%
00:28:36.943 Data Units Read: 0
00:28:36.943 Data Units Written: 0
00:28:36.943 Host Read Commands: 0
00:28:36.943 Host Write Commands: 0
00:28:36.943 Controller Busy Time: 0 minutes
00:28:36.943 Power Cycles: 0
00:28:36.943 Power On Hours: 0 hours
00:28:36.943 Unsafe Shutdowns: 0
00:28:36.943 Unrecoverable Media Errors: 0
00:28:36.943 Lifetime Error Log Entries: 0
00:28:36.943 Warning Temperature Time: 0 minutes
00:28:36.943 Critical Temperature Time: 0 minutes
00:28:36.943
00:28:36.943 Number of Queues
00:28:36.943 ================
00:28:36.943 Number of I/O Submission Queues: 127
00:28:36.943 Number of I/O Completion Queues: 127
00:28:36.943
00:28:36.943 Active Namespaces
00:28:36.943 =================
00:28:36.943 Namespace ID:1
00:28:36.943 Error Recovery Timeout: Unlimited
00:28:36.943 Command Set Identifier: NVM (00h)
00:28:36.943 Deallocate: Supported
00:28:36.943 Deallocated/Unwritten Error: Not Supported
00:28:36.943 Deallocated Read Value: Unknown
00:28:36.943 Deallocate in Write Zeroes: Not Supported
00:28:36.943 Deallocated Guard Field: 0xFFFF
00:28:36.943 Flush: Supported
00:28:36.943 Reservation: Supported
00:28:36.943 Namespace Sharing Capabilities: Multiple Controllers
00:28:36.943 Size (in LBAs): 131072 (0GiB)
00:28:36.943 Capacity (in LBAs): 131072 (0GiB)
00:28:36.943 Utilization (in LBAs): 131072 (0GiB)
00:28:36.943 NGUID: ABCDEF0123456789ABCDEF0123456789
00:28:36.943 EUI64: ABCDEF0123456789
00:28:36.943 UUID: a3dc2ef5-74e2-486e-8e4e-0cdf6c8be555
00:28:36.943 Thin Provisioning: Not Supported
00:28:36.943 Per-NS Atomic Units: Yes
00:28:36.943 Atomic Boundary Size (Normal): 0
00:28:36.943 Atomic Boundary Size (PFail): 0
00:28:36.943 Atomic Boundary Offset: 0
00:28:36.943 Maximum Single Source Range Length: 65535
00:28:36.943 Maximum Copy Length: 65535
00:28:36.943 Maximum Source Range Count: 1
00:28:36.943 NGUID/EUI64 Never Reused: No
00:28:36.943 Namespace Write Protected: No
00:28:36.943 Number of LBA Formats: 1
00:28:36.943 Current LBA Format: LBA Format #00
00:28:36.943 LBA Format #00: Data Size: 512 Metadata Size: 0
00:28:36.943
00:28:36.943 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:28:36.943 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:36.944 rmmod nvme_tcp
00:28:36.944 rmmod nvme_fabrics
00:28:36.944 rmmod nvme_keyring
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3287755 ']'
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3287755
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3287755 ']'
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3287755
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:36.944 03:31:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3287755
00:28:36.944 03:31:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:28:36.944 03:31:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:36.944 03:31:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3287755'
00:28:36.944 killing process with pid 3287755
00:28:36.944 03:31:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3287755
00:28:36.944 03:31:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3287755
00:28:37.203 03:31:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:37.203 03:31:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:37.203 03:31:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:37.203 03:31:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:37.203 03:31:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- #
remove_spdk_ns 00:28:37.203 03:31:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.203 03:31:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.203 03:31:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.728 03:31:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:39.728 00:28:39.728 real 0m5.493s 00:28:39.728 user 0m4.204s 00:28:39.728 sys 0m1.990s 00:28:39.728 03:31:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:39.728 03:31:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:39.728 ************************************ 00:28:39.728 END TEST nvmf_identify 00:28:39.728 ************************************ 00:28:39.728 03:31:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:39.728 03:31:45 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:39.728 03:31:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:39.728 03:31:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.728 03:31:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:39.728 ************************************ 00:28:39.728 START TEST nvmf_perf 00:28:39.729 ************************************ 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:39.729 * Looking for test storage... 00:28:39.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:39.729 03:31:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.103 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.103 03:31:47 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:41.104 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:41.104 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.104 
03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:41.104 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:41.104 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.104 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:41.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:28:41.363 00:28:41.363 --- 10.0.0.2 ping statistics --- 00:28:41.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.363 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:28:41.363 00:28:41.363 --- 10.0.0.1 ping statistics --- 00:28:41.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.363 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3289815 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3289815 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3289815 ']' 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:41.363 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:41.363 [2024-07-15 03:31:47.418000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
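The namespace plumbing traced above is what lets one host act as both NVMe/TCP target and initiator: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits TCP port 4420, and a one-packet ping in each direction proves the path before the target comes up. Pulled out of the xtrace, the sequence is essentially the sketch below (interface and namespace names are the ones from this run, and the nvmf_tgt flags mirror the nvmfappstart call above):

  # split the two e810 ports across a network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
  # start the target inside the namespace, as the trace does next
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &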
00:28:41.363 [2024-07-15 03:31:47.418092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.363 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.363 [2024-07-15 03:31:47.482850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:41.621 [2024-07-15 03:31:47.568312] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.621 [2024-07-15 03:31:47.568366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.621 [2024-07-15 03:31:47.568395] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.621 [2024-07-15 03:31:47.568407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.622 [2024-07-15 03:31:47.568417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.622 [2024-07-15 03:31:47.568550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.622 [2024-07-15 03:31:47.568617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:41.622 [2024-07-15 03:31:47.568664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:41.622 [2024-07-15 03:31:47.568666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.622 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:41.622 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:41.622 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:41.622 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:41.622 03:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:41.622 03:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.622 03:31:47 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:41.622 03:31:47 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:44.896 03:31:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:44.896 03:31:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:45.153 03:31:51 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:45.153 03:31:51 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:45.410 03:31:51 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:45.410 03:31:51 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:45.410 03:31:51 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:45.410 03:31:51 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:45.410 03:31:51 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:45.410 [2024-07-15 03:31:51.551228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:28:45.667 03:31:51 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.923 03:31:51 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:45.923 03:31:51 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:45.923 03:31:52 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:45.923 03:31:52 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:46.180 03:31:52 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.437 [2024-07-15 03:31:52.538852] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.437 03:31:52 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:46.694 03:31:52 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:46.694 03:31:52 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:46.694 03:31:52 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:46.694 03:31:52 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:48.065 Initializing NVMe Controllers 00:28:48.065 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:48.065 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:48.065 Initialization complete. Launching workers. 00:28:48.065 ======================================================== 00:28:48.065 Latency(us) 00:28:48.065 Device Information : IOPS MiB/s Average min max 00:28:48.065 PCIE (0000:88:00.0) NSID 1 from core 0: 85561.95 334.23 373.53 42.88 4317.79 00:28:48.065 ======================================================== 00:28:48.065 Total : 85561.95 334.23 373.53 42.88 4317.79 00:28:48.065 00:28:48.065 03:31:54 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:48.065 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.433 Initializing NVMe Controllers 00:28:49.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:49.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:49.433 Initialization complete. Launching workers. 
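The qd-1 latency table that follows is the first fabrics pass against the target provisioned just above. Condensed from the rpc.py calls in the trace, the whole bring-up is a handful of JSON-RPC commands (paths shortened to the repo root; Nvme0n1 was attached earlier from the local controller at 0000:88:00.0):

  scripts/rpc.py nvmf_create_transport -t tcp -o        # -o as in NVMF_TRANSPORT_OPTS above
  scripts/rpc.py bdev_malloc_create 64 512              # 64 MB RAM disk -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With two namespaces behind the one subsystem, the fabrics tables in this block report NSID 1 (the malloc RAM disk) and NSID 2 (the physical NVMe) on separate rows.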
00:28:49.433 ======================================================== 00:28:49.433 Latency(us) 00:28:49.433 Device Information : IOPS MiB/s Average min max 00:28:49.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 93.67 0.37 11048.25 168.76 45777.70 00:28:49.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.82 0.20 19832.54 5979.80 50891.18 00:28:49.434 ======================================================== 00:28:49.434 Total : 144.49 0.56 14137.89 168.76 50891.18 00:28:49.434 00:28:49.434 03:31:55 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:49.434 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.803 Initializing NVMe Controllers 00:28:50.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:50.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:50.803 Initialization complete. Launching workers. 00:28:50.803 ======================================================== 00:28:50.803 Latency(us) 00:28:50.803 Device Information : IOPS MiB/s Average min max 00:28:50.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8585.23 33.54 3728.66 618.15 8485.89 00:28:50.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3839.65 15.00 8378.75 5673.38 17724.51 00:28:50.803 ======================================================== 00:28:50.803 Total : 12424.88 48.53 5165.67 618.15 17724.51 00:28:50.803 00:28:50.803 03:31:56 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:50.803 03:31:56 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:50.803 03:31:56 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:50.803 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.369 Initializing NVMe Controllers 00:28:53.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.369 Controller IO queue size 128, less than required. 00:28:53.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.369 Controller IO queue size 128, less than required. 00:28:53.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:53.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:53.369 Initialization complete. Launching workers. 
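The doubled 'Controller IO queue size 128, less than required' warning above is expected at qd 128: an NVMe submission queue of size N holds at most N-1 outstanding commands, so a 128-deep workload against a 128-entry fabrics queue always leaves at least one request waiting inside the driver. That is my reading of the warning; the remedy the tool prints (lower queue depth or smaller IOs) is the authoritative one. A quick arithmetic check, assuming the N-1 rule:

  # outstanding = min(qd, qsize - 1); the remainder waits in the driver
  qd=128; qsize=128
  in_flight=$(( qd < qsize - 1 ? qd : qsize - 1 ))
  echo "in flight: $in_flight, driver-queued: $(( qd - in_flight ))"

The 256 KiB results that follow show where that pressure lands: NSID 2's max latency runs far beyond its average.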
00:28:53.369 ======================================================== 00:28:53.369 Latency(us) 00:28:53.369 Device Information : IOPS MiB/s Average min max 00:28:53.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1478.00 369.50 88368.00 51255.22 126622.49 00:28:53.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 563.00 140.75 233246.91 86961.50 382952.39 00:28:53.369 ======================================================== 00:28:53.369 Total : 2041.00 510.25 128332.15 51255.22 382952.39 00:28:53.369 00:28:53.369 03:31:59 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:53.369 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.369 No valid NVMe controllers or AIO or URING devices found 00:28:53.369 Initializing NVMe Controllers 00:28:53.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.369 Controller IO queue size 128, less than required. 00:28:53.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.369 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:53.369 Controller IO queue size 128, less than required. 00:28:53.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.369 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:53.369 WARNING: Some requested NVMe devices were skipped 00:28:53.369 03:31:59 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:53.369 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.897 Initializing NVMe Controllers 00:28:55.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.897 Controller IO queue size 128, less than required. 00:28:55.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.897 Controller IO queue size 128, less than required. 00:28:55.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:55.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:55.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:55.897 Initialization complete. Launching workers. 
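Because the @65 invocation above added --transport-stat, the TCP transport dumps per-queue poll counters when the run ends; those are the statistics blocks that follow. One derived figure worth computing is the busy-poll fraction, and note that polls - idle_polls exactly equals sock_completions for both namespaces here, i.e. every non-idle poll retired at least one socket event. For NSID 1:

  # busy-poll fraction from the NSID 1 counters printed below
  awk 'BEGIN { polls = 15789; idle = 6811;
               printf "busy polls: %d (%.1f%% of all polls)\n",
                      polls - idle, 100 * (polls - idle) / polls }'

which reports 8978 busy polls, about 56.9 percent.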
00:28:55.897 00:28:55.897 ==================== 00:28:55.897 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:55.897 TCP transport: 00:28:55.897 polls: 15789 00:28:55.897 idle_polls: 6811 00:28:55.897 sock_completions: 8978 00:28:55.897 nvme_completions: 5369 00:28:55.897 submitted_requests: 8104 00:28:55.897 queued_requests: 1 00:28:55.897 00:28:55.897 ==================== 00:28:55.897 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:55.897 TCP transport: 00:28:55.897 polls: 19720 00:28:55.897 idle_polls: 9492 00:28:55.897 sock_completions: 10228 00:28:55.897 nvme_completions: 4615 00:28:55.897 submitted_requests: 6934 00:28:55.897 queued_requests: 1 00:28:55.897 ======================================================== 00:28:55.897 Latency(us) 00:28:55.897 Device Information : IOPS MiB/s Average min max 00:28:55.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1341.99 335.50 97462.06 57199.37 167644.64 00:28:55.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1153.49 288.37 113494.63 55644.71 176125.06 00:28:55.897 ======================================================== 00:28:55.897 Total : 2495.49 623.87 104872.83 55644.71 176125.06 00:28:55.897 00:28:55.897 03:32:01 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:55.897 03:32:01 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:56.154 03:32:02 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:56.154 03:32:02 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:56.154 03:32:02 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:59.426 03:32:05 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=61b5c19b-4fdf-49c9-8192-5e2b30c86857 00:28:59.426 03:32:05 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 61b5c19b-4fdf-49c9-8192-5e2b30c86857 00:28:59.426 03:32:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=61b5c19b-4fdf-49c9-8192-5e2b30c86857 00:28:59.426 03:32:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:59.426 03:32:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:59.426 03:32:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:59.426 03:32:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:59.683 03:32:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:59.683 { 00:28:59.683 "uuid": "61b5c19b-4fdf-49c9-8192-5e2b30c86857", 00:28:59.683 "name": "lvs_0", 00:28:59.683 "base_bdev": "Nvme0n1", 00:28:59.683 "total_data_clusters": 238234, 00:28:59.683 "free_clusters": 238234, 00:28:59.683 "block_size": 512, 00:28:59.683 "cluster_size": 4194304 00:28:59.683 } 00:28:59.683 ]' 00:28:59.683 03:32:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="61b5c19b-4fdf-49c9-8192-5e2b30c86857") .free_clusters' 00:28:59.683 03:32:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:59.684 03:32:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="61b5c19b-4fdf-49c9-8192-5e2b30c86857") .cluster_size' 00:28:59.684 03:32:05 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:59.684 03:32:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:59.684 03:32:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:28:59.684 952936 00:28:59.684 03:32:05 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:59.684 03:32:05 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:59.684 03:32:05 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61b5c19b-4fdf-49c9-8192-5e2b30c86857 lbd_0 20480 00:29:00.248 03:32:06 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=20c37159-6fcc-46d3-8a3f-bc43b3bd962b 00:29:00.248 03:32:06 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 20c37159-6fcc-46d3-8a3f-bc43b3bd962b lvs_n_0 00:29:01.181 03:32:07 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=342caa97-cdc0-4cfe-bcc5-6fc982ef2a0d 00:29:01.181 03:32:07 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 342caa97-cdc0-4cfe-bcc5-6fc982ef2a0d 00:29:01.181 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=342caa97-cdc0-4cfe-bcc5-6fc982ef2a0d 00:29:01.181 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:01.181 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:01.181 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:01.181 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:01.439 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:01.439 { 00:29:01.439 "uuid": "61b5c19b-4fdf-49c9-8192-5e2b30c86857", 00:29:01.439 "name": "lvs_0", 00:29:01.439 "base_bdev": "Nvme0n1", 00:29:01.439 "total_data_clusters": 238234, 00:29:01.439 "free_clusters": 233114, 00:29:01.439 "block_size": 512, 00:29:01.439 "cluster_size": 4194304 00:29:01.439 }, 00:29:01.439 { 00:29:01.439 "uuid": "342caa97-cdc0-4cfe-bcc5-6fc982ef2a0d", 00:29:01.439 "name": "lvs_n_0", 00:29:01.439 "base_bdev": "20c37159-6fcc-46d3-8a3f-bc43b3bd962b", 00:29:01.439 "total_data_clusters": 5114, 00:29:01.439 "free_clusters": 5114, 00:29:01.439 "block_size": 512, 00:29:01.439 "cluster_size": 4194304 00:29:01.439 } 00:29:01.439 ]' 00:29:01.439 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="342caa97-cdc0-4cfe-bcc5-6fc982ef2a0d") .free_clusters' 00:29:01.439 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:01.439 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="342caa97-cdc0-4cfe-bcc5-6fc982ef2a0d") .cluster_size' 00:29:01.439 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:01.439 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:01.439 03:32:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:29:01.439 20456 00:29:01.439 03:32:07 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:01.439 03:32:07 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 342caa97-cdc0-4cfe-bcc5-6fc982ef2a0d lbd_nest_0 20456 00:29:01.697 03:32:07 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=ae97d9e2-60e8-404e-a53e-86eba2b0aa78 00:29:01.697 03:32:07 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:01.954 03:32:07 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:01.954 03:32:07 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ae97d9e2-60e8-404e-a53e-86eba2b0aa78 00:29:02.212 03:32:08 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.469 03:32:08 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:02.469 03:32:08 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:02.469 03:32:08 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:02.469 03:32:08 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:02.469 03:32:08 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.469 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.656 Initializing NVMe Controllers 00:29:14.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:14.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:14.656 Initialization complete. Launching workers. 00:29:14.656 ======================================================== 00:29:14.656 Latency(us) 00:29:14.656 Device Information : IOPS MiB/s Average min max 00:29:14.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.79 0.02 22342.04 196.89 46226.90 00:29:14.656 ======================================================== 00:29:14.656 Total : 44.79 0.02 22342.04 196.89 46226.90 00:29:14.656 00:29:14.656 03:32:18 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:14.656 03:32:18 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:14.656 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.629 Initializing NVMe Controllers 00:29:24.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:24.629 Initialization complete. Launching workers. 
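From here perf.sh sweeps the matrix declared above, qd_depth=(1 32 128) crossed with io_size=(512 131072): six ten-second randrw passes against the single nested-lvol namespace. The table that follows is the qd 1 x 128 KiB pass (the qd 1 x 512 B pass printed just above it). Reassembled from the xtrace, the driver loop is simply:

  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done

Queue depth dominates the results: compare 44.79 IOPS for the qd 1, 512 B pass just above with 11952.00 for the qd 128, 512 B pass later in the sweep.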
00:29:24.629 ======================================================== 00:29:24.629 Latency(us) 00:29:24.629 Device Information : IOPS MiB/s Average min max 00:29:24.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.80 10.35 12095.45 5970.76 18919.68 00:29:24.629 ======================================================== 00:29:24.629 Total : 82.80 10.35 12095.45 5970.76 18919.68 00:29:24.629 00:29:24.629 03:32:29 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:24.629 03:32:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:24.629 03:32:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:24.629 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.610 Initializing NVMe Controllers 00:29:34.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.610 Initialization complete. Launching workers. 00:29:34.610 ======================================================== 00:29:34.610 Latency(us) 00:29:34.610 Device Information : IOPS MiB/s Average min max 00:29:34.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7309.20 3.57 4377.86 294.61 10500.76 00:29:34.610 ======================================================== 00:29:34.610 Total : 7309.20 3.57 4377.86 294.61 10500.76 00:29:34.610 00:29:34.610 03:32:39 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:34.610 03:32:39 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.610 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.589 Initializing NVMe Controllers 00:29:44.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:44.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:44.590 Initialization complete. Launching workers. 00:29:44.590 ======================================================== 00:29:44.590 Latency(us) 00:29:44.590 Device Information : IOPS MiB/s Average min max 00:29:44.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2295.90 286.99 13954.19 595.47 31037.25 00:29:44.590 ======================================================== 00:29:44.590 Total : 2295.90 286.99 13954.19 595.47 31037.25 00:29:44.590 00:29:44.590 03:32:49 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:44.590 03:32:49 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:44.590 03:32:49 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:44.590 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.564 Initializing NVMe Controllers 00:29:54.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.564 Controller IO queue size 128, less than required. 00:29:54.564 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:54.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:54.564 Initialization complete. Launching workers. 00:29:54.564 ======================================================== 00:29:54.564 Latency(us) 00:29:54.564 Device Information : IOPS MiB/s Average min max 00:29:54.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11952.00 5.84 10716.82 1753.27 52507.96 00:29:54.564 ======================================================== 00:29:54.564 Total : 11952.00 5.84 10716.82 1753.27 52507.96 00:29:54.564 00:29:54.564 03:33:00 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:54.564 03:33:00 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:54.564 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.531 Initializing NVMe Controllers 00:30:04.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.531 Controller IO queue size 128, less than required. 00:30:04.531 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:04.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:04.532 Initialization complete. Launching workers. 00:30:04.532 ======================================================== 00:30:04.532 Latency(us) 00:30:04.532 Device Information : IOPS MiB/s Average min max 00:30:04.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1185.50 148.19 108776.52 23258.14 238477.93 00:30:04.532 ======================================================== 00:30:04.532 Total : 1185.50 148.19 108776.52 23258.14 238477.93 00:30:04.532 00:30:04.790 03:33:10 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:04.790 03:33:10 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ae97d9e2-60e8-404e-a53e-86eba2b0aa78 00:30:05.726 03:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:05.984 03:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 20c37159-6fcc-46d3-8a3f-bc43b3bd962b 00:30:06.242 03:33:12 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:06.499 rmmod nvme_tcp 00:30:06.499 rmmod nvme_fabrics 00:30:06.499 rmmod nvme_keyring 00:30:06.499 03:33:12 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3289815 ']' 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3289815 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3289815 ']' 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3289815 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3289815 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3289815' 00:30:06.499 killing process with pid 3289815 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3289815 00:30:06.499 03:33:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3289815 00:30:08.399 03:33:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:08.399 03:33:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:08.399 03:33:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:08.399 03:33:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.399 03:33:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:08.399 03:33:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.399 03:33:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.399 03:33:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.302 03:33:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:10.302 00:30:10.302 real 1m30.840s 00:30:10.302 user 5m35.303s 00:30:10.302 sys 0m16.174s 00:30:10.302 03:33:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:10.302 03:33:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:10.302 ************************************ 00:30:10.302 END TEST nvmf_perf 00:30:10.302 ************************************ 00:30:10.302 03:33:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:10.302 03:33:16 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:10.302 03:33:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:10.302 03:33:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:10.302 03:33:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:10.302 ************************************ 00:30:10.302 START TEST nvmf_fio_host 00:30:10.302 ************************************ 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:10.302 * Looking for test 
storage... 00:30:10.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:10.302 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:10.303 03:33:16 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:12.241 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:12.241 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:12.241 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:12.241 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
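The nvmf_tcp_init sequence that follows wires the two ports into a point-to-point rig: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with the NVMe/TCP port opened and reachability verified by ping. Condensed from the trace below:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                                            # initiator -> target check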
00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.241 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.242 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:12.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:30:12.500 00:30:12.500 --- 10.0.0.2 ping statistics --- 00:30:12.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.500 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:30:12.500 00:30:12.500 --- 10.0.0.1 ping statistics --- 00:30:12.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.500 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3302393 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3302393 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3302393 ']' 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:12.500 03:33:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.500 [2024-07-15 03:33:18.517465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:30:12.500 [2024-07-15 03:33:18.517539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.500 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.500 [2024-07-15 03:33:18.589495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:12.758 [2024-07-15 03:33:18.685565] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
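With the namespace up, nvmf_tgt has just been launched inside it and the harness blocks until the RPC socket answers. Stripped to its essentials, and with the polling loop written out as a bare-bones stand-in for the harness's waitforlisten helper (flags as logged: shm id 0, tracepoint mask 0xFFFF, core mask 0xF):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the default RPC socket until the target responds; rpc_get_methods
# is a cheap query that succeeds as soon as the app is listening.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done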
00:30:12.758 [2024-07-15 03:33:18.685625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.758 [2024-07-15 03:33:18.685642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.758 [2024-07-15 03:33:18.685655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.758 [2024-07-15 03:33:18.685667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.758 [2024-07-15 03:33:18.685724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.758 [2024-07-15 03:33:18.685791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.758 [2024-07-15 03:33:18.685901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:12.758 [2024-07-15 03:33:18.685904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.758 03:33:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:12.758 03:33:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:30:12.758 03:33:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:13.026 [2024-07-15 03:33:19.083575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.026 03:33:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:13.026 03:33:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:13.026 03:33:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.026 03:33:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:13.284 Malloc1 00:30:13.284 03:33:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:13.847 03:33:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:13.847 03:33:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:14.103 [2024-07-15 03:33:20.215175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.103 03:33:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:14.667 03:33:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.667 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:14.667 fio-3.35 00:30:14.667 Starting 1 thread 00:30:14.667 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.194 00:30:17.194 test: (groupid=0, jobs=1): err= 0: pid=3302869: Mon Jul 15 03:33:23 2024 00:30:17.194 read: IOPS=8189, BW=32.0MiB/s (33.5MB/s)(64.2MiB/2007msec) 00:30:17.194 slat (nsec): min=1997, max=109625, avg=2558.18, stdev=1457.13 00:30:17.194 clat (usec): min=2764, max=14709, avg=8609.34, stdev=695.19 00:30:17.194 lat (usec): min=2785, max=14712, avg=8611.90, stdev=695.12 00:30:17.194 clat percentiles (usec): 00:30:17.194 | 1.00th=[ 7046], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8094], 00:30:17.194 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:30:17.194 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:30:17.194 | 99.00th=[10159], 99.50th=[10421], 99.90th=[11469], 99.95th=[13829], 00:30:17.194 | 99.99th=[14615] 00:30:17.194 bw ( KiB/s): 
min=31632, max=33688, per=100.00%, avg=32762.00, stdev=854.82, samples=4 00:30:17.194 iops : min= 7908, max= 8422, avg=8190.50, stdev=213.71, samples=4 00:30:17.194 write: IOPS=8195, BW=32.0MiB/s (33.6MB/s)(64.3MiB/2007msec); 0 zone resets 00:30:17.194 slat (nsec): min=2102, max=98179, avg=2689.51, stdev=1214.47 00:30:17.194 clat (usec): min=1228, max=13743, avg=6977.25, stdev=609.64 00:30:17.194 lat (usec): min=1234, max=13745, avg=6979.94, stdev=609.59 00:30:17.194 clat percentiles (usec): 00:30:17.194 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6521], 00:30:17.194 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:30:17.194 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7635], 95.00th=[ 7832], 00:30:17.194 | 99.00th=[ 8291], 99.50th=[ 8455], 99.90th=[11338], 99.95th=[12518], 00:30:17.194 | 99.99th=[13698] 00:30:17.194 bw ( KiB/s): min=32384, max=33288, per=99.92%, avg=32756.00, stdev=399.17, samples=4 00:30:17.194 iops : min= 8096, max= 8322, avg=8189.00, stdev=99.79, samples=4 00:30:17.194 lat (msec) : 2=0.02%, 4=0.09%, 10=98.97%, 20=0.92% 00:30:17.194 cpu : usr=58.92%, sys=36.74%, ctx=74, majf=0, minf=7 00:30:17.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:17.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:17.194 issued rwts: total=16437,16449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:17.194 00:30:17.194 Run status group 0 (all jobs): 00:30:17.194 READ: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=64.2MiB (67.3MB), run=2007-2007msec 00:30:17.194 WRITE: bw=32.0MiB/s (33.6MB/s), 32.0MiB/s-32.0MiB/s (33.6MB/s-33.6MB/s), io=64.3MiB (67.4MB), run=2007-2007msec 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # grep libasan 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.194 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:17.195 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:17.195 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:17.195 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:17.195 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:17.195 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:17.195 03:33:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:17.195 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:17.195 fio-3.35 00:30:17.195 Starting 1 thread 00:30:17.195 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.717 00:30:19.717 test: (groupid=0, jobs=1): err= 0: pid=3303198: Mon Jul 15 03:33:25 2024 00:30:19.717 read: IOPS=8463, BW=132MiB/s (139MB/s)(266MiB/2009msec) 00:30:19.717 slat (nsec): min=2958, max=96510, avg=3807.03, stdev=1651.41 00:30:19.717 clat (usec): min=2145, max=17188, avg=8700.80, stdev=1979.36 00:30:19.717 lat (usec): min=2148, max=17194, avg=8704.60, stdev=1979.41 00:30:19.717 clat percentiles (usec): 00:30:19.717 | 1.00th=[ 4883], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7046], 00:30:19.717 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9110], 00:30:19.717 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[11207], 95.00th=[12125], 00:30:19.717 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15926], 99.95th=[16188], 00:30:19.717 | 99.99th=[16909] 00:30:19.717 bw ( KiB/s): min=60800, max=79712, per=52.48%, avg=71064.00, stdev=7778.54, samples=4 00:30:19.717 iops : min= 3800, max= 4982, avg=4441.50, stdev=486.16, samples=4 00:30:19.717 write: IOPS=5049, BW=78.9MiB/s (82.7MB/s)(145MiB/1838msec); 0 zone resets 00:30:19.717 slat (usec): min=30, max=203, avg=34.14, stdev= 5.65 00:30:19.717 clat (usec): min=6339, max=18881, avg=11108.92, stdev=1914.91 00:30:19.717 lat (usec): min=6389, max=18912, avg=11143.06, stdev=1915.34 00:30:19.717 clat percentiles (usec): 00:30:19.717 | 1.00th=[ 7373], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9503], 00:30:19.717 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:30:19.717 | 70.00th=[11994], 80.00th=[12518], 90.00th=[13698], 95.00th=[14484], 00:30:19.717 | 99.00th=[16581], 99.50th=[17171], 99.90th=[18220], 99.95th=[18482], 00:30:19.717 | 99.99th=[19006] 00:30:19.717 bw ( KiB/s): min=64544, max=82528, per=91.34%, avg=73792.00, stdev=7425.56, samples=4 00:30:19.717 iops : min= 4034, max= 5158, avg=4612.00, stdev=464.10, samples=4 00:30:19.717 lat (msec) : 4=0.12%, 10=59.94%, 20=39.94% 00:30:19.717 cpu : usr=74.76%, sys=22.30%, ctx=44, majf=0, minf=3 
00:30:19.717 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:30:19.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:19.717 issued rwts: total=17004,9281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:19.717 00:30:19.717 Run status group 0 (all jobs): 00:30:19.717 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=266MiB (279MB), run=2009-2009msec 00:30:19.717 WRITE: bw=78.9MiB/s (82.7MB/s), 78.9MiB/s-78.9MiB/s (82.7MB/s-82.7MB/s), io=145MiB (152MB), run=1838-1838msec 00:30:19.717 03:33:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:19.974 03:33:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:19.974 03:33:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:19.974 03:33:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:19.974 03:33:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:19.974 03:33:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:30:19.974 03:33:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:19.974 03:33:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:19.974 03:33:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:19.974 03:33:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:19.974 03:33:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:30:19.974 03:33:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:23.253 Nvme0n1 00:30:23.253 03:33:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:26.534 03:33:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=71665f4a-3e68-4eec-9929-068894511b2b 00:30:26.534 03:33:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 71665f4a-3e68-4eec-9929-068894511b2b 00:30:26.534 03:33:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=71665f4a-3e68-4eec-9929-068894511b2b 00:30:26.534 03:33:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:26.534 03:33:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:26.534 03:33:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:26.534 03:33:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:26.534 03:33:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:26.534 { 00:30:26.534 "uuid": "71665f4a-3e68-4eec-9929-068894511b2b", 00:30:26.534 "name": "lvs_0", 00:30:26.534 "base_bdev": "Nvme0n1", 00:30:26.534 "total_data_clusters": 930, 00:30:26.534 "free_clusters": 930, 00:30:26.534 "block_size": 512, 
00:30:26.534 "cluster_size": 1073741824 00:30:26.534 } 00:30:26.534 ]' 00:30:26.534 03:33:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="71665f4a-3e68-4eec-9929-068894511b2b") .free_clusters' 00:30:26.534 03:33:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:26.534 03:33:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="71665f4a-3e68-4eec-9929-068894511b2b") .cluster_size' 00:30:26.534 03:33:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:26.534 03:33:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:30:26.534 03:33:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:26.534 952320 00:30:26.534 03:33:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:26.792 22dcfc9d-e7e5-4455-aea1-6daeaa166c59 00:30:26.792 03:33:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:27.050 03:33:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:27.308 03:33:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # 
[[ -n '' ]] 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:27.565 03:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:27.565 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:27.565 fio-3.35 00:30:27.565 Starting 1 thread 00:30:27.565 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.090 00:30:30.090 test: (groupid=0, jobs=1): err= 0: pid=3304483: Mon Jul 15 03:33:35 2024 00:30:30.090 read: IOPS=6109, BW=23.9MiB/s (25.0MB/s)(47.9MiB/2008msec) 00:30:30.090 slat (usec): min=2, max=131, avg= 2.61, stdev= 1.76 00:30:30.090 clat (usec): min=984, max=171206, avg=11528.97, stdev=11564.57 00:30:30.090 lat (usec): min=987, max=171243, avg=11531.58, stdev=11564.80 00:30:30.090 clat percentiles (msec): 00:30:30.090 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:30.090 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:30:30.090 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:30.090 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:30.090 | 99.99th=[ 171] 00:30:30.090 bw ( KiB/s): min=17048, max=27040, per=99.83%, avg=24394.00, stdev=4899.39, samples=4 00:30:30.090 iops : min= 4262, max= 6760, avg=6098.50, stdev=1224.85, samples=4 00:30:30.090 write: IOPS=6090, BW=23.8MiB/s (24.9MB/s)(47.8MiB/2008msec); 0 zone resets 00:30:30.090 slat (usec): min=2, max=105, avg= 2.75, stdev= 1.44 00:30:30.090 clat (usec): min=275, max=169309, avg=9297.99, stdev=10854.64 00:30:30.090 lat (usec): min=277, max=169315, avg=9300.74, stdev=10854.87 00:30:30.090 clat percentiles (msec): 00:30:30.090 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:30:30.090 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:30.090 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:30.090 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:30:30.090 | 99.99th=[ 169] 00:30:30.090 bw ( KiB/s): min=18088, max=26544, per=99.93%, avg=24346.00, stdev=4173.00, samples=4 00:30:30.090 iops : min= 4522, max= 6636, avg=6086.50, stdev=1043.25, samples=4 00:30:30.090 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:30.090 lat (msec) : 2=0.03%, 4=0.11%, 10=58.82%, 20=40.49%, 250=0.52% 00:30:30.090 cpu : usr=58.45%, sys=38.57%, ctx=116, majf=0, minf=25 00:30:30.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:30.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.090 issued rwts: total=12267,12230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.090 00:30:30.090 Run status group 0 (all jobs): 00:30:30.090 READ: bw=23.9MiB/s (25.0MB/s), 23.9MiB/s-23.9MiB/s (25.0MB/s-25.0MB/s), io=47.9MiB (50.2MB), run=2008-2008msec 00:30:30.090 WRITE: bw=23.8MiB/s (24.9MB/s), 23.8MiB/s-23.8MiB/s (24.9MB/s-24.9MB/s), io=47.8MiB (50.1MB), run=2008-2008msec 00:30:30.090 03:33:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:30.348 03:33:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:31.279 03:33:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=a573b897-f6d9-49dc-93d4-5642fee3e5b5 00:30:31.279 03:33:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb a573b897-f6d9-49dc-93d4-5642fee3e5b5 00:30:31.279 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=a573b897-f6d9-49dc-93d4-5642fee3e5b5 00:30:31.279 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:31.279 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:31.279 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:31.279 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:31.536 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:31.536 { 00:30:31.536 "uuid": "71665f4a-3e68-4eec-9929-068894511b2b", 00:30:31.536 "name": "lvs_0", 00:30:31.536 "base_bdev": "Nvme0n1", 00:30:31.536 "total_data_clusters": 930, 00:30:31.536 "free_clusters": 0, 00:30:31.536 "block_size": 512, 00:30:31.536 "cluster_size": 1073741824 00:30:31.536 }, 00:30:31.536 { 00:30:31.536 "uuid": "a573b897-f6d9-49dc-93d4-5642fee3e5b5", 00:30:31.536 "name": "lvs_n_0", 00:30:31.536 "base_bdev": "22dcfc9d-e7e5-4455-aea1-6daeaa166c59", 00:30:31.536 "total_data_clusters": 237847, 00:30:31.536 "free_clusters": 237847, 00:30:31.536 "block_size": 512, 00:30:31.536 "cluster_size": 4194304 00:30:31.536 } 00:30:31.536 ]' 00:30:31.536 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a573b897-f6d9-49dc-93d4-5642fee3e5b5") .free_clusters' 00:30:31.794 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:31.794 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a573b897-f6d9-49dc-93d4-5642fee3e5b5") .cluster_size' 00:30:31.794 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:31.794 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:31.794 03:33:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:31.794 951388 00:30:31.794 03:33:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:32.423 57d985ca-50ee-4d4a-b648-63bb7c5ab4d6 00:30:32.423 03:33:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:32.681 03:33:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:32.939 03:33:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:33.197 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:33.198 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:33.198 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:33.198 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:33.198 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:33.198 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:33.198 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:33.198 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:33.198 03:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:33.456 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:33.456 fio-3.35 00:30:33.456 Starting 1 thread 00:30:33.456 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.981 00:30:35.981 test: (groupid=0, jobs=1): err= 0: pid=3305213: Mon Jul 15 03:33:41 2024 00:30:35.981 read: IOPS=5903, BW=23.1MiB/s (24.2MB/s)(46.3MiB/2009msec) 00:30:35.981 slat (nsec): min=1949, max=169665, avg=2583.12, stdev=2167.28 00:30:35.981 clat (usec): min=4087, max=20213, avg=11935.92, stdev=1057.03 00:30:35.981 lat (usec): min=4110, max=20216, avg=11938.51, stdev=1056.94 00:30:35.981 clat percentiles (usec): 00:30:35.981 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:30:35.981 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:30:35.981 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:30:35.981 | 99.00th=[14222], 99.50th=[14484], 99.90th=[18220], 99.95th=[18482], 00:30:35.981 | 99.99th=[20055] 00:30:35.981 bw ( KiB/s): min=22480, max=24032, per=99.92%, avg=23596.00, stdev=747.69, samples=4 00:30:35.981 iops : min= 5620, max= 6008, avg=5899.00, stdev=186.92, samples=4 00:30:35.981 write: IOPS=5901, BW=23.1MiB/s (24.2MB/s)(46.3MiB/2009msec); 0 zone resets 00:30:35.981 slat (usec): min=2, max=120, avg= 2.72, stdev= 1.53 00:30:35.981 clat (usec): min=2871, max=18422, avg=9629.74, stdev=915.30 00:30:35.981 lat (usec): min=2880, max=18425, avg=9632.46, stdev=915.28 00:30:35.981 clat percentiles (usec): 00:30:35.981 | 1.00th=[ 7570], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:30:35.981 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:30:35.981 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:30:35.981 | 99.00th=[11600], 99.50th=[11863], 99.90th=[17171], 99.95th=[18220], 00:30:35.981 | 99.99th=[18482] 00:30:35.981 bw ( KiB/s): min=23360, max=23744, per=99.90%, avg=23584.00, stdev=169.33, samples=4 00:30:35.981 iops : min= 5840, max= 5936, avg=5896.00, stdev=42.33, samples=4 00:30:35.981 lat (msec) : 4=0.03%, 10=35.57%, 20=64.38%, 50=0.01% 00:30:35.981 cpu : usr=60.91%, sys=35.86%, ctx=90, majf=0, minf=25 00:30:35.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:35.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:35.981 issued rwts: total=11860,11857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:35.981 00:30:35.981 Run status group 0 (all jobs): 00:30:35.981 READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.6MB), run=2009-2009msec 00:30:35.981 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.6MB), run=2009-2009msec 00:30:35.981 03:33:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:35.981 03:33:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:35.981 03:33:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:40.158 03:33:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore 
-l lvs_n_0 00:30:40.158 03:33:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:43.429 03:33:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:43.429 03:33:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:45.329 rmmod nvme_tcp 00:30:45.329 rmmod nvme_fabrics 00:30:45.329 rmmod nvme_keyring 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3302393 ']' 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3302393 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3302393 ']' 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3302393 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3302393 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3302393' 00:30:45.329 killing process with pid 3302393 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3302393 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3302393 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:45.329 03:33:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.862 03:33:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:47.862 00:30:47.862 real 0m37.178s 00:30:47.862 user 2m22.586s 00:30:47.862 sys 0m6.928s 00:30:47.862 03:33:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:47.862 03:33:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.862 ************************************ 00:30:47.862 END TEST nvmf_fio_host 00:30:47.862 ************************************ 00:30:47.862 03:33:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:47.862 03:33:53 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:47.862 03:33:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:47.862 03:33:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.862 03:33:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.862 ************************************ 00:30:47.862 START TEST nvmf_failover 00:30:47.862 ************************************ 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:47.862 * Looking for test storage... 00:30:47.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.862 03:33:53 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:47.862 03:33:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:49.762 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:49.762 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:49.762 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:49.762 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:49.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:30:49.762 00:30:49.762 --- 10.0.0.2 ping statistics --- 00:30:49.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.762 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:30:49.762 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:49.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:30:49.763 00:30:49.763 --- 10.0.0.1 ping statistics --- 00:30:49.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.763 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3308573 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3308573 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3308573 ']' 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:49.763 03:33:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:49.763 [2024-07-15 03:33:55.805687] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:30:49.763 [2024-07-15 03:33:55.805760] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.763 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.763 [2024-07-15 03:33:55.874412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.021 [2024-07-15 03:33:55.964882] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.021 [2024-07-15 03:33:55.964945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.021 [2024-07-15 03:33:55.964970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.021 [2024-07-15 03:33:55.964984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.021 [2024-07-15 03:33:55.964995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.021 [2024-07-15 03:33:55.965095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.021 [2024-07-15 03:33:55.965196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.021 [2024-07-15 03:33:55.965199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.021 03:33:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:50.021 03:33:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:50.021 03:33:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:50.021 03:33:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:50.021 03:33:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:50.021 03:33:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.021 03:33:56 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:50.278 [2024-07-15 03:33:56.312799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.278 03:33:56 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:50.535 Malloc0 00:30:50.535 03:33:56 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:50.792 03:33:56 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:51.048 03:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@26 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.305 [2024-07-15 03:33:57.333742] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.305 03:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:51.563 [2024-07-15 03:33:57.578520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:51.563 03:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:51.820 [2024-07-15 03:33:57.823429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:51.820 03:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3308744 00:30:51.820 03:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:51.820 03:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:51.820 03:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3308744 /var/tmp/bdevperf.sock 00:30:51.820 03:33:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3308744 ']' 00:30:51.820 03:33:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:51.820 03:33:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:51.820 03:33:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:51.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
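The trace above assembles the failover fixture one RPC at a time: a TCP transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and listeners on ports 4420, 4421 and 4422, before launching bdevperf (-z -r /var/tmp/bdevperf.sock) to drive verify I/O. A condensed sketch of that sequence, assuming the nvmf target is already up and rpc abbreviates the scripts/rpc.py path printed in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport, 8 KiB I/O unit size
$rpc bdev_malloc_create 64 512 -b Malloc0                # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                           # three listeners to fail over between
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done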
00:30:51.820 03:33:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:51.820 03:33:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:52.077 03:33:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.077 03:33:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:52.077 03:33:58 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:52.643 NVMe0n1 00:30:52.643 03:33:58 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:52.900 00:30:52.900 03:33:58 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3308878 00:30:52.900 03:33:58 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:52.901 03:33:58 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:53.835 03:33:59 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.092 [2024-07-15 03:34:00.115339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115562] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 [2024-07-15 03:34:00.115793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de970 is same with the state(5) to be set 00:30:54.092 03:34:00 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:57.395 03:34:03 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:57.652 00:30:57.652 03:34:03 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:57.909 [2024-07-15 03:34:03.920823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.920892] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.920919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.920931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.920944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.920956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.920968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.920980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.920992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.921003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.921016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.921028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.921040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.921053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.921065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.921078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 [2024-07-15 03:34:03.921090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dfed0 is same with the state(5) to be set 00:30:57.909 03:34:03 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:01.186 03:34:06 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.186 [2024-07-15 03:34:07.221974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.186 03:34:07 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:02.117 03:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:02.374 [2024-07-15 03:34:08.479501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999fa0 is same with the state(5) to be set 00:31:02.375 03:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3308878 00:31:08.945 0 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3308744 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3308744 ']' 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3308744 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3308744 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3308744' 00:31:08.945 killing process with pid 3308744 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3308744 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3308744 00:31:08.945 03:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:08.945 [2024-07-15 03:33:57.888639] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:08.945 [2024-07-15 03:33:57.888733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3308744 ] 00:31:08.945 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.945 [2024-07-15 03:33:57.954218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.945 [2024-07-15 03:33:58.043719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.945 Running I/O for 15 seconds... 
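While bdevperf runs its 15-second verify workload against NVMe0 (attached once through port 4420 and once through 4421), host/failover.sh walks the listeners through a full cycle: drop 4420, bring up 4422, drop 4421, restore 4420, drop 4422, then reap perform_tests. A sketch of that choreography, where listener() is a hypothetical shorthand for the add/remove RPCs traced above and run_test_pid stands for the perform_tests PID (3308878 in this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
listener() { $rpc nvmf_subsystem_${1}_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$2"; }
listener remove 4420; sleep 3   # in-flight I/O aborts on 4420 and fails over to the 4421 path
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
listener remove 4421; sleep 3   # second failover, now onto 4422
listener add 4420; sleep 1      # original listener comes back
listener remove 4422            # final failover returns I/O to 4420
wait "$run_test_pid"            # perform_tests exits 0 once the verify run completes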
00:31:08.945 [2024-07-15 03:34:00.116684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.116737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.116767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.116785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.116802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.116818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.116834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.116849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.116867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.116889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.116922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.116937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.116952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.116966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.116981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.116994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.117009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.117024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.117043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.117057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.117073] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.117087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.117109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.117124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.117139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.117159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.117174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.117202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.117218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.117232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.945 [2024-07-15 03:34:00.117247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.945 [2024-07-15 03:34:00.117260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.946 [2024-07-15 03:34:00.117274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.946 [2024-07-15 03:34:00.117287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.946 [2024-07-15 03:34:00.117301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.946 [2024-07-15 03:34:00.117314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.946 [2024-07-15 03:34:00.117329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.946 [2024-07-15 03:34:00.117341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.946 [2024-07-15 03:34:00.117356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.946 [2024-07-15 03:34:00.117370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.946 [2024-07-15 03:34:00.117384] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:08.946 [2024-07-15 03:34:00.117398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:08.946 [2024-07-15 03:34:00.117412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:08.946 [2024-07-15 03:34:00.117425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for in-flight WRITEs lba:77968 through lba:78136 (cids varying), each aborted with SQ DELETION (00/08) ...]
00:31:08.946 [2024-07-15 03:34:00.118137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.946 [2024-07-15 03:34:00.118152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:08.946 [2024-07-15 03:34:00.118167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.946 [2024-07-15 03:34:00.118196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pattern continues for WRITEs lba:78144 through lba:78496 ...]
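The NOTICE pairs above are SPDK's teardown path for an I/O qpair: when the submission queue is deleted with commands still in flight, nvme_io_qpair_print_command prints each command and it is completed with generic status 0x08, which spdk_nvme_print_completion renders as "ABORTED - SQ DELETION (00/08)", the parenthesized pair being status-code-type/status-code in hex. A minimal host-side sketch of the sequence that produces such completions, assuming an already attached ctrlr and active ns; the function name, LBA, and buffer size are illustrative, not taken from this test:

    /* sketch, not part of the test suite: free a qpair with I/O outstanding */
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void
    on_write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)cb_arg;
            (void)cpl;
            /* I/O caught by the SQ deletion completes here with
             * sct=0x00 (GENERIC), sc=0x08 (ABORTED - SQ DELETION). */
    }

    static void
    provoke_sq_deletion_aborts(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
    {
            struct spdk_nvme_qpair *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
            void *buf = spdk_zmalloc(0x1000, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

            /* an 8-block write, like the lba:77952 len:8 commands in this log */
            spdk_nvme_ns_cmd_write(ns, qpair, buf, 77952, 8, on_write_done, NULL, 0);

            /* Freeing the qpair deletes the SQ before the write completes; the
             * abort path seen throughout this log then finishes the command. */
            spdk_nvme_ctrlr_free_io_qpair(qpair);
            spdk_free(buf);
    }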
00:31:08.947 [2024-07-15 03:34:00.119581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:08.947 [2024-07-15 03:34:00.119595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical pairs follow for WRITEs lba:78512 through lba:78544 ...]
00:31:08.948 [2024-07-15 03:34:00.119778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:08.948 [2024-07-15 03:34:00.119811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78552 len:8 PRP1 0x0 PRP2 0x0
00:31:08.948 [2024-07-15 03:34:00.119825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:08.948 [2024-07-15 03:34:00.119872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:08.948 [2024-07-15 03:34:00.119902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort repeats for admin cid:1, cid:2, and cid:3 ...]
00:31:08.948 [2024-07-15 03:34:00.120003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1036bd0 is same with the state(5) to be set
00:31:08.948 [2024-07-15 03:34:00.120229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:08.948 [2024-07-15 03:34:00.120250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:08.948 [2024-07-15 03:34:00.120277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78560 len:8 PRP1 0x0 PRP2 0x0
00:31:08.948 [2024-07-15 03:34:00.120291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort/complete/print triplet repeats for queued WRITEs lba:78568 through lba:78592 ...]
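From the host application's side these aborts arrive through the normal completion callback; the status fields in struct spdk_nvme_cpl distinguish them from real I/O failures. A minimal sketch of such a callback, using the public spdk/nvme.h definitions; io_ctx and its resubmit flag are hypothetical bookkeeping, not SPDK API:

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    struct io_ctx {            /* hypothetical per-command bookkeeping */
            uint64_t lba;
            bool     resubmit;
    };

    static void
    io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            struct io_ctx *io = cb_arg;

            /* "(00/08)" in the log is (status code type/status code):
             * 0x00 = generic, 0x08 = command aborted due to SQ deletion. */
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The qpair was torn down under the command; retry it on
                     * a fresh qpair instead of reporting a device error. */
                    io->resubmit = true;
                    fprintf(stderr, "lba %" PRIu64 " aborted by SQ deletion, will resubmit\n",
                            io->lba);
                    return;
            }
            io->resubmit = false;
    }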
00:31:08.948 [2024-07-15 03:34:00.120501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:08.948 [2024-07-15 03:34:00.120517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:08.948 [2024-07-15 03:34:00.120528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78600 len:8 PRP1 0x0 PRP2 0x0
00:31:08.948 [2024-07-15 03:34:00.120544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the triplet repeats for the rest of the queued requests: WRITEs lba:78608 through lba:78664 and READs lba:77664 through lba:77784 interleaved, then WRITEs lba:77792 onward, each completed as ABORTED - SQ DELETION (00/08) ...]
00:31:08.952 [2024-07-15 03:34:00.123637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:08.952 [2024-07-15 03:34:00.123647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:08.952 [2024-07-15 03:34:00.123658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78072 len:8 PRP1 0x0 PRP2 0x0
00:31:08.952 [2024-07-15 03:34:00.123679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:08.952 [2024-07-15 03:34:00.123692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*:
aborting queued i/o 00:31:08.952 [2024-07-15 03:34:00.123703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.952 [2024-07-15 03:34:00.123714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78080 len:8 PRP1 0x0 PRP2 0x0 00:31:08.952 [2024-07-15 03:34:00.123726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.952 [2024-07-15 03:34:00.123739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.952 [2024-07-15 03:34:00.123750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.952 [2024-07-15 03:34:00.123764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78088 len:8 PRP1 0x0 PRP2 0x0 00:31:08.952 [2024-07-15 03:34:00.123777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.952 [2024-07-15 03:34:00.123789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.952 [2024-07-15 03:34:00.123800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.952 [2024-07-15 03:34:00.123811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78096 len:8 PRP1 0x0 PRP2 0x0 00:31:08.952 [2024-07-15 03:34:00.123823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.952 [2024-07-15 03:34:00.123836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.952 [2024-07-15 03:34:00.123847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.952 [2024-07-15 03:34:00.123874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78104 len:8 PRP1 0x0 PRP2 0x0 00:31:08.952 [2024-07-15 03:34:00.123894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.952 [2024-07-15 03:34:00.123908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.952 [2024-07-15 03:34:00.123918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.952 [2024-07-15 03:34:00.123930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78112 len:8 PRP1 0x0 PRP2 0x0 00:31:08.952 [2024-07-15 03:34:00.123942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.952 [2024-07-15 03:34:00.123956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.952 [2024-07-15 03:34:00.123967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.123978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78120 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.123991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 
03:34:00.124015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78128 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78136 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77656 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78144 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78152 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124327] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78160 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78176 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78192 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78200 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.124768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.124778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.124789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.124801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.131089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.131118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.131132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.131146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.131160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.131171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.131183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78248 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.131225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.131239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.131250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 
03:34:00.131267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.131280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.131294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.131305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.131315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.131329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.131341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.131352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.953 [2024-07-15 03:34:00.131363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:31:08.953 [2024-07-15 03:34:00.131375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.953 [2024-07-15 03:34:00.131389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.953 [2024-07-15 03:34:00.131399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.131959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.131972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.131985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.131995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.132007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.132020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.132037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.132048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.132060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.132073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.132087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.132098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.132109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.132122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.132135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.132146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.132164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78400 len:8 PRP1 0x0 PRP2 0x0 
00:31:08.954 [2024-07-15 03:34:00.132192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.132206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.132216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.132227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78408 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.132239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.132252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.132263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.132274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78416 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.132286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.132299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.132309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.132320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78424 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.132343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.132356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.132366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.132376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78432 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.132389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.132401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.132412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.132423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78440 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.132440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.132453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.132464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.954 [2024-07-15 03:34:00.132475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78448 len:8 PRP1 0x0 PRP2 0x0 00:31:08.954 [2024-07-15 03:34:00.132487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.954 [2024-07-15 03:34:00.132500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.954 [2024-07-15 03:34:00.132510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.132521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78456 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.132534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.132546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.132557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.132568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78464 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.132580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.132593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.132603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.132614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78472 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.132626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.132640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.132651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.132661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78480 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.132674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.132687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.132697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.132709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78488 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.132721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.132734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.132744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.132755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78496 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.132767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.132779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.132790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.132804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78504 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.132818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.132831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.132842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.132852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78512 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.132898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.132914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.132925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.132936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78520 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.132949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.132962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.132973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.132985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78528 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.132998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.133011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.133022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.133033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78536 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.133045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.133058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.133069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.133081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78544 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.133094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:08.955 [2024-07-15 03:34:00.133107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.955 [2024-07-15 03:34:00.133118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.955 [2024-07-15 03:34:00.133129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78552 len:8 PRP1 0x0 PRP2 0x0 00:31:08.955 [2024-07-15 03:34:00.133142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:00.133237] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x105d250 was disconnected and freed. reset controller. 00:31:08.955 [2024-07-15 03:34:00.133256] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:08.955 [2024-07-15 03:34:00.133272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.955 [2024-07-15 03:34:00.133344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036bd0 (9): Bad file descriptor 00:31:08.955 [2024-07-15 03:34:00.136657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.955 [2024-07-15 03:34:00.327496] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:08.955 [2024-07-15 03:34:03.921594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.955 [2024-07-15 03:34:03.921637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:03.921682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.955 [2024-07-15 03:34:03.921698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:03.921714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.955 [2024-07-15 03:34:03.921728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:03.921743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.955 [2024-07-15 03:34:03.921771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:03.921787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.955 [2024-07-15 03:34:03.921801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.955 [2024-07-15 03:34:03.921816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:08.955 [2024-07-15 03:34:03.921831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
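The four-record group summarized above is SPDK draining a qpair whose submission queue has gone away: each still-queued request is completed manually with NVMe generic status 0x08, "Command Aborted due to SQ Deletion" (printed as "00/08", status code type / status code), and the bdev layer then fails over from 10.0.0.2:4420 to 10.0.0.2:4421 and resets the controller. A minimal C sketch of that drain pattern follows; it is an illustrative model with made-up type and function names, not SPDK source.

/* sq_deletion_drain.c - illustrative model of the abort pattern above. */
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* NVMe generic status 0x08 = "Command Aborted due to SQ Deletion";
 * the log renders it as "(00/08)" (status code type / status code). */
#define SCT_GENERIC            0x0
#define SC_ABORTED_SQ_DELETION 0x08

struct queued_req {            /* hypothetical stand-in for an SPDK request */
    const char *opc;           /* "READ" or "WRITE" */
    uint64_t    lba;
    uint32_t    len;           /* in blocks */
};

/* Mirrors the four records seen per command: abort, manual-completion
 * notice, command print, completion print. */
static void manual_complete_aborted(const struct queued_req *r)
{
    printf("*ERROR*: aborting queued i/o\n");
    printf("*NOTICE*: Command completed manually:\n");
    printf("*NOTICE*: %s sqid:1 cid:0 nsid:1 lba:%" PRIu64 " len:%" PRIu32 "\n",
           r->opc, r->lba, r->len);
    printf("*NOTICE*: ABORTED - SQ DELETION (%02x/%02x) qid:1 cid:0\n",
           SCT_GENERIC, SC_ABORTED_SQ_DELETION);
}

int main(void)
{
    /* A few of the queued commands from the log; the real drain walks
     * every outstanding request on the qpair. */
    const struct queued_req reqs[] = {
        { "WRITE", 77936, 8 },
        { "WRITE", 77944, 8 },
        { "READ",  77648, 8 },
    };
    for (size_t i = 0; i < sizeof(reqs) / sizeof(reqs[0]); i++)
        manual_complete_aborted(&reqs[i]);
    return 0;
}

Note that every completion carries dnr:0 (Do Not Retry clear), which is what lets the bdev layer resubmit the aborted I/O on the alternate path once the failover and controller reset succeed.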
00:31:08.955-00:31:08.957 [2024-07-15 03:34:03.921594 - 03:34:03.923948] nvme_qpair.c: after the reset, the same abort group repeats on the restored qpair for READ lba:123296 through lba:123424 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) interleaved with WRITE lba:123432 through lba:123904 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), step 8, len:8, varying cid, each command completing as:
  474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:08.957 [2024-07-15 03:34:03.923963]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.957 [2024-07-15 03:34:03.923979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.957 [2024-07-15 03:34:03.923993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.957 [2024-07-15 03:34:03.924008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.957 [2024-07-15 03:34:03.924021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.957 [2024-07-15 03:34:03.924036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.957 [2024-07-15 03:34:03.924050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.957 [2024-07-15 03:34:03.924065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.957 [2024-07-15 03:34:03.924079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.957 [2024-07-15 03:34:03.924095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:123976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.924971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.924985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.925000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.925014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.925029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.958 [2024-07-15 03:34:03.925046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.925080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.958 [2024-07-15 03:34:03.925098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124200 len:8 PRP1 0x0 PRP2 0x0 00:31:08.958 [2024-07-15 03:34:03.925112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.925130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.958 [2024-07-15 03:34:03.925142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.958 [2024-07-15 03:34:03.925154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124208 len:8 PRP1 0x0 PRP2 0x0 00:31:08.958 [2024-07-15 03:34:03.925173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.925203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.958 [2024-07-15 03:34:03.925214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.958 [2024-07-15 03:34:03.925225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:124216 len:8 PRP1 0x0 PRP2 0x0 00:31:08.958 [2024-07-15 03:34:03.925237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.958 [2024-07-15 03:34:03.925251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.958 [2024-07-15 03:34:03.925261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.958 [2024-07-15 03:34:03.925272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124224 len:8 PRP1 0x0 PRP2 0x0 00:31:08.959 [2024-07-15 03:34:03.925284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.959 [2024-07-15 03:34:03.925298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.959 [2024-07-15 03:34:03.925308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.959 [2024-07-15 03:34:03.925319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124232 len:8 PRP1 0x0 PRP2 0x0 00:31:08.959 [2024-07-15 03:34:03.925332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.959 [2024-07-15 03:34:03.925345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.959 [2024-07-15 03:34:03.925355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.959 [2024-07-15 03:34:03.925366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124240 len:8 PRP1 0x0 PRP2 0x0 00:31:08.959 [2024-07-15 03:34:03.925378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.959 [2024-07-15 03:34:03.925391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.959 [2024-07-15 03:34:03.925402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.959 [2024-07-15 03:34:03.925412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124248 len:8 PRP1 0x0 PRP2 0x0 00:31:08.959 [2024-07-15 03:34:03.925425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.959 [2024-07-15 03:34:03.925438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.959 [2024-07-15 03:34:03.925449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.959 [2024-07-15 03:34:03.925459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124256 len:8 PRP1 0x0 PRP2 0x0 00:31:08.959 [2024-07-15 03:34:03.925476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.959 [2024-07-15 03:34:03.925489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.959 [2024-07-15 03:34:03.925500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.959 [2024-07-15 03:34:03.925511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124264 len:8 PRP1 0x0 PRP2 0x0 
00:31:08.959 [2024-07-15 03:34:03.925524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.959 [2024-07-15 03:34:03.925536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.959 [2024-07-15 03:34:03.925547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.959 [2024-07-15 03:34:03.925558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124272 len:8 PRP1 0x0 PRP2 0x0 00:31:08.959 [2024-07-15 03:34:03.925570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.959 [2024-07-15 03:34:03.925583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.959 [2024-07-15 03:34:03.925594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.959 [2024-07-15 03:34:03.925605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124280 len:8 PRP1 0x0 PRP2 0x0 00:31:08.959 [2024-07-15 03:34:03.925617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.959 [2024-07-15 03:34:03.925630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.959 [2024-07-15 03:34:03.925640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.959 [2024-07-15 03:34:03.925652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124288 len:8 PRP1 0x0 PRP2 0x0 00:31:08.959 [2024-07-15 03:34:03.925665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.959 [2024-07-15 03:34:03.925677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.959 [2024-07-15 03:34:03.925688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.959 [2024-07-15 03:34:03.925700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124296 len:8 PRP1 0x0 PRP2 0x0 00:31:08.959 [2024-07-15 03:34:03.925712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.959 [2024-07-15 03:34:03.925728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.959 [2024-07-15 03:34:03.925739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.959 [2024-07-15 03:34:03.925750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124304 len:8 PRP1 0x0 PRP2 0x0 00:31:08.959 [2024-07-15 03:34:03.925763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.959 [2024-07-15 03:34:03.925776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.959 [2024-07-15 03:34:03.925787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.959 [2024-07-15 03:34:03.925798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124312 len:8 PRP1 0x0 PRP2 0x0 00:31:08.959 [2024-07-15 03:34:03.925811] 
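Every completion in the storm above carries the same status pair: "(00/08)" is NVMe status code type 0x0 (generic command status) and status code 0x08, Command Aborted due to SQ Deletion - the submission queue went away while these WRITEs were outstanding, so each one is failed back instead of executed. Note dnr:0 on every completion: the do-not-retry bit is clear, so the I/O is safe to resubmit once a queue exists again. (The requests "completed manually" were still queued in software and print PRP1 0x0 PRP2 0x0 because they were never built into SQ entries.) A minimal sketch of recognizing this status in a completion callback with SPDK's public types; constant names follow spdk/nvme_spec.h and should be checked against your SPDK version:

    #include "spdk/nvme.h"  /* struct spdk_nvme_cpl and status constants */

    /* "(00/08)" in the log == sct 0x0 (GENERIC) / sc 0x08 (SQ deletion abort). */
    static bool
    cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }

    /* In an spdk_nvme_cmd_cb: a retriable path failure, not a data error. */
    static void
    write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl) && cpl_is_sq_deletion_abort(cpl) &&
                !cpl->status.dnr) {
                    /* requeue the I/O; the controller is about to be reset */
            }
    }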
00:31:08.959 [2024-07-15 03:34:03.925872] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1201a00 was disconnected and freed. reset controller.
00:31:08.959 [2024-07-15 03:34:03.925918] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:31:08.959 [2024-07-15 03:34:03.925954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:08.959 [2024-07-15 03:34:03.925973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining queued ASYNC EVENT REQUESTs (cid:2, cid:1, cid:0) are aborted the same way ...]
00:31:08.959 [2024-07-15 03:34:03.926072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.959 [2024-07-15 03:34:03.926111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036bd0 (9): Bad file descriptor
00:31:08.959 [2024-07-15 03:34:03.929363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.959 [2024-07-15 03:34:03.958496] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
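The sequence just above is the recovery path end to end: bdev_nvme's disconnected-qpair callback frees the dead I/O qpair, the queued admin ASYNC EVENT REQUESTs are aborted with the same SQ-deletion status, the controller is marked failed (the TCP flush fails with errno 9, EBADF, because the socket is already gone), and the driver disconnects, fails over from the first listener (10.0.0.2:4421) to the second (10.0.0.2:4422), and resets successfully. An application driving the NVMe library directly, without bdev_nvme, would do roughly the following - a simplified sketch under assumed setup, not the bdev_nvme implementation:

    #include "spdk/nvme.h"

    /* Poll one I/O qpair; on transport failure, reset the controller and
     * re-create the qpair, mirroring the log's reset-controller path. */
    static int
    poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
    {
            int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0 /* no cap */);

            if (rc >= 0) {
                    return 0;       /* rc completions reaped; qpair healthy */
            }

            /* All in-flight and queued I/O have already completed as
             * ABORTED - SQ DELETION; rebuild the path before resubmitting. */
            spdk_nvme_ctrlr_free_io_qpair(*qpair);
            if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                    return -1;      /* controller stays in failed state */
            }
            *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
            return *qpair != NULL ? 0 : -1;
    }

(Multipath failover between TRIDs, as bdev_nvme does here, additionally means reconnecting to the alternate address; the sketch only shows the reset-and-rebuild skeleton.)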
00:31:08.959 [2024-07-15 03:34:08.481636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:08.959 [2024-07-15 03:34:08.481681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for each in-flight READ, lba:52048 through lba:52128 ...]
00:31:08.960 [2024-07-15 03:34:08.482123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:08.960 [2024-07-15 03:34:08.482137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / "ABORTED - SQ DELETION (00/08)" pair repeats for each in-flight WRITE, lba:52144 through lba:52768 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:08.962 [2024-07-15 03:34:08.484544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.484575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.484593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52776 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.484606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.484623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.484635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.484646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52784 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.484661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.484674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.484689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.484700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52792 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.484713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.484726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.484737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.484748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52800 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.484761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.484773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.484784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.484795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52808 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.484807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.484820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.484831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.484842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52816 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.484854] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.484891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.484903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.484915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52824 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.484928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.484943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.484955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.484967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52832 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.484980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.484993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.485005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.485016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52840 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.485029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.485043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.485054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.485066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52848 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.485079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.485096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.485107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.485119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52856 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.485132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.485145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.485164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.485191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52864 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.485204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.485217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.485228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.485239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52872 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.485252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.485265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.485276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.485288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52880 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.485300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.485313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.485324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.485335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52888 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.485347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.962 [2024-07-15 03:34:08.485360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.962 [2024-07-15 03:34:08.485371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.962 [2024-07-15 03:34:08.485382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52896 len:8 PRP1 0x0 PRP2 0x0 00:31:08.962 [2024-07-15 03:34:08.485395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.485408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52904 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.485444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.485457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52912 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.485495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.485508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52920 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.485542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.485555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52928 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.485590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.485603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52936 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.485638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.485650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52944 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.485684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.485696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52952 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.485729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.485742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52960 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.485777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 
03:34:08.485790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52968 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.485822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.485835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52976 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.485904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.485920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52984 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.485954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.485967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.485978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.485989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52992 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.486001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.486014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.486024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.486035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53000 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.486048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.486061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.486071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.486083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53008 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.486095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.486108] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.486118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.486129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53016 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.486141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.486154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.486177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.486204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53024 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.486217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.486230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.486240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.486250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53032 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.486263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.486280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.486291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.486302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53040 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.486314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.486327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.486337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.486348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53048 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.486360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.486373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:08.963 [2024-07-15 03:34:08.486383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:08.963 [2024-07-15 03:34:08.486394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53056 len:8 PRP1 0x0 PRP2 0x0 00:31:08.963 [2024-07-15 03:34:08.486407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.486466] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x12017f0 was disconnected and freed. reset controller. 00:31:08.963 [2024-07-15 03:34:08.486484] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:08.963 [2024-07-15 03:34:08.486532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.963 [2024-07-15 03:34:08.486551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.486567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.963 [2024-07-15 03:34:08.486580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.963 [2024-07-15 03:34:08.486594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.964 [2024-07-15 03:34:08.486607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.964 [2024-07-15 03:34:08.486621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.964 [2024-07-15 03:34:08.486634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.964 [2024-07-15 03:34:08.486648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.964 [2024-07-15 03:34:08.486703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1036bd0 (9): Bad file descriptor 00:31:08.964 [2024-07-15 03:34:08.489974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.964 [2024-07-15 03:34:08.651005] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
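The reset path above is easier to follow if its marker lines are pulled out of the noise. A minimal sketch (a hypothetical grep helper, not part of failover.sh; the log path is the try.txt file this test writes):

    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    # surface the failover start, qpair teardown, reconnect, and reset-complete markers
    grep -E 'Start failover|disconnected and freed|resetting controller|Resetting controller successful' "$log"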
00:31:08.964
00:31:08.964 Latency(us)
00:31:08.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:08.964 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:08.964 Verification LBA range: start 0x0 length 0x4000
00:31:08.964 NVMe0n1 : 15.01 8103.96 31.66 993.85 0.00 14040.96 807.06 22622.06
00:31:08.964 ===================================================================================================================
00:31:08.964 Total : 8103.96 31.66 993.85 0.00 14040.96 807.06 22622.06
00:31:08.964 Received shutdown signal, test time was about 15.000000 seconds
00:31:08.964
00:31:08.964 Latency(us)
00:31:08.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:08.964 ===================================================================================================================
00:31:08.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3310739 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3310739 /var/tmp/bdevperf.sock 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3310739 ']' 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:08.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
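waitforlisten blocks until the freshly launched bdevperf answers on its RPC socket. A simplified sketch of that polling loop, with an illustrative retry count (the real implementation in autotest_common.sh also verifies the pid is still alive):

    sock=/var/tmp/bdevperf.sock
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in $(seq 1 100); do
        # rpc_get_methods succeeds once the app is listening on the socket
        "$rpc" -s "$sock" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done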
00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:08.964 [2024-07-15 03:34:14.787619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:08.964 03:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:08.964 [2024-07-15 03:34:15.048361] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:08.964 03:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:09.530 NVMe0n1 00:31:09.530 03:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:09.788 00:31:09.788 03:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:10.352 00:31:10.352 03:34:16 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:10.352 03:34:16 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:10.609 03:34:16 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:10.867 03:34:16 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:14.147 03:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:14.147 03:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:14.147 03:34:20 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3311402 00:31:14.147 03:34:20 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:14.147 03:34:20 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3311402 00:31:15.085 0 00:31:15.085 03:34:21 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:15.085 [2024-07-15 03:34:14.320517] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:31:15.085 [2024-07-15 03:34:14.320621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3310739 ] 00:31:15.085 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.085 [2024-07-15 03:34:14.382722] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.085 [2024-07-15 03:34:14.465842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.085 [2024-07-15 03:34:16.785852] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:15.085 [2024-07-15 03:34:16.785967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.085 [2024-07-15 03:34:16.785992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.085 [2024-07-15 03:34:16.786008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.085 [2024-07-15 03:34:16.786021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.085 [2024-07-15 03:34:16.786035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.085 [2024-07-15 03:34:16.786047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.085 [2024-07-15 03:34:16.786061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.085 [2024-07-15 03:34:16.786075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.085 [2024-07-15 03:34:16.786088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.085 [2024-07-15 03:34:16.786132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.085 [2024-07-15 03:34:16.786163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa1bd0 (9): Bad file descriptor 00:31:15.085 [2024-07-15 03:34:16.831412] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:15.085 Running I/O for 1 seconds... 
00:31:15.085
00:31:15.085 Latency(us)
00:31:15.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:15.085 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:15.085 Verification LBA range: start 0x0 length 0x4000
00:31:15.085 NVMe0n1 : 1.00 8158.55 31.87 0.00 0.00 15625.08 1225.77 14175.19
00:31:15.085 ===================================================================================================================
00:31:15.085 Total : 8158.55 31.87 0.00 0.00 15625.08 1225.77 14175.19
00:31:15.085 03:34:21 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:15.085 03:34:21 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:15.343 03:34:21 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:15.601 03:34:21 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:15.601 03:34:21 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:15.859 03:34:21 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:16.117 03:34:22 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3310739 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3310739 ']' 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3310739 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3310739 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3310739' killing process with pid 3310739 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3310739 00:31:19.435 03:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3310739 00:31:19.693 03:34:25 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:19.693 03:34:25 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:19.951
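Condensed, the failover exercise this trace steps through is: publish the extra listeners, attach all three paths under one controller name, then detach the active path so bdev_nvme fails over to the next trid. A recap sketch using the same rpc.py calls that appear in the trace (paths shortened to variables for readability):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do
        $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # dropping the active path forces failover to the next registered trid
    $rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

Each detach is followed by a bdev_nvme_get_controllers | grep -q NVMe0 check to confirm the controller survived the path loss.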
03:34:26 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:19.951 rmmod nvme_tcp 00:31:19.951 rmmod nvme_fabrics 00:31:19.951 rmmod nvme_keyring 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3308573 ']' 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3308573 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3308573 ']' 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3308573 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3308573 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:19.951 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3308573' 00:31:19.951 killing process with pid 3308573 00:31:20.209 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3308573 00:31:20.209 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3308573 00:31:20.209 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:20.209 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:20.209 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:20.209 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:20.209 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:20.209 03:34:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.209 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.209 03:34:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.751 03:34:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:22.751 00:31:22.751 real 0m34.891s 00:31:22.751 user 2m1.700s 00:31:22.751 sys 0m6.373s 00:31:22.751 03:34:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:22.751 03:34:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
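The teardown traced above boils down to roughly the following (a sketch only; in nvmf/common.sh the modprobe -r runs inside a retry loop, and _remove_spdk_ns is the script's own helper, assumed here to delete the cvl_0_0_ns_spdk namespace):

    modprobe -v -r nvme-tcp                  # retried up to 20 times in the real script
    modprobe -v -r nvme-fabrics
    kill -0 "$nvmfpid" && kill "$nvmfpid"    # killprocess: signal only if still alive
    _remove_spdk_ns                          # drops the target's network namespace
    ip -4 addr flush cvl_0_1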
00:31:22.751 ************************************ 00:31:22.751 END TEST nvmf_failover 00:31:22.751 ************************************ 00:31:22.751 03:34:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:22.751 03:34:28 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:22.751 03:34:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:22.751 03:34:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:22.751 03:34:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.751 ************************************ 00:31:22.751 START TEST nvmf_host_discovery 00:31:22.751 ************************************ 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:22.751 * Looking for test storage... 00:31:22.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.751 03:34:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain prefixes repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[toolchain prefixes repeated as above]:/var/lib/snapd/snap/bin 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[toolchain prefixes repeated as above]:/var/lib/snapd/snap/bin 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo [PATH as above] 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:22.752 03:34:28
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:22.752 03:34:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.653 03:34:30 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:24.653 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:24.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.653 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:24.654 03:34:30 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:24.654 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:24.654 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.654 03:34:30 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:24.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:31:24.654 00:31:24.654 --- 10.0.0.2 ping statistics --- 00:31:24.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.654 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:24.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:31:24.654 00:31:24.654 --- 10.0.0.1 ping statistics --- 00:31:24.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.654 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3314004 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
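The nvmf_tcp_init sequence just traced gives the test a two-endpoint topology on one box: the target NIC (cvl_0_0) moves into namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator NIC (cvl_0_1) stays in the default namespace with 10.0.0.1/24, port 4420 is opened in iptables, and a ping in each direction proves reachability. Condensed into a standalone sketch with the names from the log (run as root):

#!/usr/bin/env bash
# Sketch of nvmf_tcp_init: namespace the target NIC, address both ends, verify.
set -e
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # target side is namespaced
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # NVMF_INITIATOR_IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # NVMF_FIRST_TARGET_IP
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP data port
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator
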
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3314004 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3314004 ']' 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:24.654 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.654 [2024-07-15 03:34:30.618539] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:24.654 [2024-07-15 03:34:30.618610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.654 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.654 [2024-07-15 03:34:30.685753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.654 [2024-07-15 03:34:30.776411] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.654 [2024-07-15 03:34:30.776470] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.654 [2024-07-15 03:34:30.776504] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.654 [2024-07-15 03:34:30.776518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.654 [2024-07-15 03:34:30.776529] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
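nvmfappstart then launches the target inside that namespace (the NVMF_TARGET_NS_CMD prefix) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock, which is what the "Waiting for process to start up..." line is about. A simplified stand-in, assuming an SPDK build tree with scripts/rpc.py; the real helper's retry mechanics differ in detail:

#!/usr/bin/env bash
# Sketch: start nvmf_tgt in the netns and poll its RPC socket until ready.
NS=cvl_0_0_ns_spdk
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for _ in $(seq 1 100); do
  # Ready once an RPC round-trip succeeds; rpc_get_methods is a cheap probe.
  scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
  sleep 0.5
done
echo "nvmf_tgt running as pid $nvmfpid"
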
00:31:24.654 [2024-07-15 03:34:30.776557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.912 [2024-07-15 03:34:30.915909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.912 [2024-07-15 03:34:30.924100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.912 null0 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.912 null1 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3314139 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- 
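With the target answering RPCs, the traced calls configure it: a TCP transport with the NVMF_TRANSPORT_OPTS flags (-t tcp -o, plus -u 8192 for an 8 KiB IO unit), the well-known discovery subsystem listening on 10.0.0.2:8009, and two 1000 MiB, 512-byte-block null bdevs to back namespaces. Equivalent standalone calls (the harness's rpc_cmd resolves to the same rpc.py script):

#!/usr/bin/env bash
# Sketch: target-side configuration as traced, against /var/tmp/spdk.sock.
RPC="scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192        # flags from NVMF_TRANSPORT_OPTS
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
  -t tcp -a 10.0.0.2 -s 8009                        # discovery service port
$RPC bdev_null_create null0 1000 512                # 1000 MiB, 512 B blocks
$RPC bdev_null_create null1 1000 512
$RPC bdev_wait_for_examine                          # let bdev examine settle
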
host/discovery.sh@46 -- # waitforlisten 3314139 /tmp/host.sock 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3314139 ']' 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:24.912 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:24.912 03:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.912 [2024-07-15 03:34:30.999594] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:24.912 [2024-07-15 03:34:30.999674] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314139 ] 00:31:24.912 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.169 [2024-07-15 03:34:31.066072] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.169 [2024-07-15 03:34:31.155778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- 
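The host side is a second nvmf_tgt instance (one core, -m 0x1) with its own RPC socket at /tmp/host.sock, which is why every host-side rpc_cmd from here on carries -s /tmp/host.sock. It turns on bdev_nvme debug logging and starts the discovery poller against the target's 8009 port with the test's host NQN. Standalone sketch, with the same readiness-polling assumption as before:

#!/usr/bin/env bash
# Sketch: bring up the host app and aim its discovery service at the target.
HOST_SOCK=/tmp/host.sock
RPC="scripts/rpc.py -s $HOST_SOCK"
./build/bin/nvmf_tgt -m 0x1 -r "$HOST_SOCK" &
hostpid=$!
until $RPC rpc_get_methods &>/dev/null; do sleep 0.5; done  # stand-in for waitforlisten
$RPC log_set_flag bdev_nvme                                 # verbose discovery tracing
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
  -f ipv4 -q nqn.2021-12.io.spdk:test   # -b: controller name base, -q: host NQN
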
host/discovery.sh@59 -- # sort 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.169 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:25.426 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
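The jq/sort/xargs pipelines repeated above are the test's query helpers: each flattens an RPC result into one sorted, space-separated line so every check can be a plain string comparison ('' before discovery, later 'nvme0' and 'nvme0n1 nvme0n2'). Reconstructed from the traced pipeline; the real definitions live in host/discovery.sh and go through the harness's rpc_cmd:

# Query helpers, run against the host RPC socket.
RPC="scripts/rpc.py -s /tmp/host.sock"
get_subsystem_names() {
  $RPC bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
  $RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
# Both are empty before anything is discovered, as the trace confirms:
[[ "$(get_subsystem_names)" == "" ]] && [[ "$(get_bdev_list)" == "" ]]
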
00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.427 [2024-07-15 03:34:31.557801] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:25.427 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- 
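In parallel the target grows a real subsystem: cnode0 is created, null0 becomes its first namespace, and a data listener opens ("Listening on 10.0.0.2 port 4420" above). The host still sees nothing, since no host NQN has been allowed in yet, which is what the empty-string checks keep confirming. Standalone equivalents of the traced target-side RPCs:

#!/usr/bin/env bash
# Sketch: build the subsystem the host will later discover and attach to.
RPC="scripts/rpc.py"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0   # host will see nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
  -t tcp -a 10.0.0.2 -s 4420                                  # first data port
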
host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:31:25.685 03:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:26.255 [2024-07-15 03:34:32.293649] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:26.255 [2024-07-15 03:34:32.293694] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:26.255 [2024-07-15 03:34:32.293722] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:26.255 [2024-07-15 03:34:32.381994] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:26.515 [2024-07-15 03:34:32.444474] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
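The attach only happens once nvmf_subsystem_add_host allowlists nqn.2021-12.io.spdk:test, which is immediately followed above by "new subsystem nvme0" and "attach nvme0 done". The checks bracketing that step lean on two harness helpers whose bodies are visible in the trace: waitforcondition re-evals a condition string up to ten times, a second apart, and get_notification_count counts RPC notifications past a running notify_id cursor. A sketch of both, under those assumptions (the cursor-advance rule is inferred from the -i 0 / -i 1 / -i 2 progression in the trace):

# Sketch of the polling helpers from common/autotest_common.sh as traced.
waitforcondition() {
  local cond=$1
  local max=10
  while ((max--)); do
    eval "$cond" && return 0   # condition strings are eval'd shell expressions
    sleep 1
  done
  return 1                     # test fails if the state never converges
}

RPC="scripts/rpc.py -s /tmp/host.sock"
notify_id=0
get_notification_count() {
  notification_count=$($RPC notify_get_notifications -i $notify_id | jq '. | length')
  notify_id=$((notify_id + notification_count))   # advance past what was counted
}
# Usage, as in the trace:
waitforcondition 'get_notification_count && ((notification_count == 0))'
waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
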
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:26.515 [2024-07-15 03:34:32.444497] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:26.776 03:34:32 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:26.776 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
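get_subsystem_paths is the third helper: for a named controller it lists the trsvcid of every connected path, numerically sorted, so the single-path and multipath states later in the test compare as "4420", "4420 4421", and finally "4421". As traced:

# Path-listing helper reconstructed from the traced pipeline.
RPC="scripts/rpc.py -s /tmp/host.sock"
get_subsystem_paths() {
  $RPC bdev_nvme_get_controllers -n "$1" \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
get_subsystem_paths nvme0   # prints "4420" right after the first attach
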
| length' 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.777 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:27.036 03:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.294 [2024-07-15 03:34:33.246631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:27.294 [2024-07-15 03:34:33.246922] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:27.294 [2024-07-15 03:34:33.246970] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- 
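Both hot-add steps above follow the same AER pattern: a target-side RPC changes the discovery log page, the host's discovery controller logs "got aer", re-reads the log page, and applies the delta. Adding null1 yields a second namespace bdev (nvme0n2) plus one notification; the 4421 listener yields a second path on nvme0. The two RPCs, standalone:

#!/usr/bin/env bash
# Sketch: the two hot-add operations whose AER fallout is traced above.
RPC="scripts/rpc.py"
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1   # host gains nvme0n2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
  -t tcp -a 10.0.0.2 -s 4421                                  # host gains path 4421
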
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.294 [2024-07-15 03:34:33.374783] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:27.294 03:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:27.553 [2024-07-15 03:34:33.642103] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:27.553 [2024-07-15 03:34:33.642125] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:27.553 [2024-07-15 03:34:33.642134] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:28.491 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.491 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:28.491 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:28.491 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:28.491 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:28.491 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.491 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.491 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.492 [2024-07-15 03:34:34.482852] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:28.492 [2024-07-15 03:34:34.482903] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:28.492 [2024-07-15 03:34:34.487094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.492 id:0 cdw10:00000000 cdw11:00000000 00:31:28.492 [2024-07-15 03:34:34.487132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.492 [2024-07-15 03:34:34.487150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.492 [2024-07-15 03:34:34.487163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.492 [2024-07-15 03:34:34.487177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.492 [2024-07-15 03:34:34.487191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.492 [2024-07-15 03:34:34.487206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.492 [2024-07-15 03:34:34.487219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:28.492 [2024-07-15 03:34:34.487233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49640 is same with the state(5) to be set 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:28.492 [2024-07-15 03:34:34.497085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49640 (9): Bad file descriptor 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.492 [2024-07-15 03:34:34.507130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.492 [2024-07-15 03:34:34.507416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.492 [2024-07-15 03:34:34.507447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa49640 with addr=10.0.0.2, port=4420 00:31:28.492 [2024-07-15 03:34:34.507474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49640 is same with the state(5) to be set 00:31:28.492 [2024-07-15 03:34:34.507498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49640 (9): Bad file descriptor 00:31:28.492 [2024-07-15 03:34:34.507522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.492 [2024-07-15 03:34:34.507539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.492 [2024-07-15 03:34:34.507556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.492 [2024-07-15 03:34:34.507578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:28.492 [2024-07-15 03:34:34.517211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.492 [2024-07-15 03:34:34.517439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.492 [2024-07-15 03:34:34.517467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa49640 with addr=10.0.0.2, port=4420 00:31:28.492 [2024-07-15 03:34:34.517484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49640 is same with the state(5) to be set 00:31:28.492 [2024-07-15 03:34:34.517507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49640 (9): Bad file descriptor 00:31:28.492 [2024-07-15 03:34:34.517542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.492 [2024-07-15 03:34:34.517560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.492 [2024-07-15 03:34:34.517574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:28.492 [2024-07-15 03:34:34.517593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:28.492 [2024-07-15 03:34:34.527297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.492 [2024-07-15 03:34:34.527459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.492 [2024-07-15 03:34:34.527487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa49640 with addr=10.0.0.2, port=4420 00:31:28.492 [2024-07-15 03:34:34.527504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49640 is same with the state(5) to be set 00:31:28.492 [2024-07-15 03:34:34.527526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49640 (9): Bad file descriptor 00:31:28.492 [2024-07-15 03:34:34.527548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.492 [2024-07-15 03:34:34.527562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.492 [2024-07-15 03:34:34.527575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.492 [2024-07-15 03:34:34.527594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.492 [2024-07-15 03:34:34.537372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.492 [2024-07-15 03:34:34.537634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.492 [2024-07-15 03:34:34.537665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa49640 with addr=10.0.0.2, port=4420 00:31:28.492 [2024-07-15 03:34:34.537683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49640 is same with the state(5) to be set 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:28.492 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.493 [2024-07-15 
03:34:34.537705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49640 (9): Bad file descriptor 00:31:28.493 [2024-07-15 03:34:34.537741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.493 [2024-07-15 03:34:34.537760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.493 [2024-07-15 03:34:34.537775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.493 [2024-07-15 03:34:34.537795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:28.493 [2024-07-15 03:34:34.547444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.493 [2024-07-15 03:34:34.547629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.493 [2024-07-15 03:34:34.547658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa49640 with addr=10.0.0.2, port=4420 00:31:28.493 [2024-07-15 03:34:34.547675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49640 is same with the state(5) to be set 00:31:28.493 [2024-07-15 03:34:34.547698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49640 (9): Bad file descriptor 00:31:28.493 [2024-07-15 03:34:34.547732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.493 [2024-07-15 03:34:34.547750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.493 [2024-07-15 03:34:34.547764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.493 [2024-07-15 03:34:34.547785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:28.493 [2024-07-15 03:34:34.557531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.493 [2024-07-15 03:34:34.557764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.493 [2024-07-15 03:34:34.557792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa49640 with addr=10.0.0.2, port=4420 00:31:28.493 [2024-07-15 03:34:34.557809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49640 is same with the state(5) to be set 00:31:28.493 [2024-07-15 03:34:34.557832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49640 (9): Bad file descriptor 00:31:28.493 [2024-07-15 03:34:34.557887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.493 [2024-07-15 03:34:34.557908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.493 [2024-07-15 03:34:34.557922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.493 [2024-07-15 03:34:34.557947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
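The error burst above is the intended fallout of removing the 4420 listener: bdev_nvme keeps trying to reconnect that path, each attempt failing with connect() errno 111 (ECONNREFUSED) and "Bad file descriptor" on the dead qpair, until the freshly fetched discovery log page no longer lists 4420 and the path is pruned while 4421 survives. The step and its settling check, standalone (waitforcondition and get_subsystem_paths as sketched earlier):

#!/usr/bin/env bash
# Sketch: drop the first data listener, then wait for the path set to shrink.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
  -t tcp -a 10.0.0.2 -s 4420
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'
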
00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.493 [2024-07-15 03:34:34.567599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.493 [2024-07-15 03:34:34.567790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.493 [2024-07-15 03:34:34.567818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa49640 with addr=10.0.0.2, port=4420 00:31:28.493 [2024-07-15 03:34:34.567834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa49640 is same with the state(5) to be set 00:31:28.493 [2024-07-15 03:34:34.567856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49640 (9): Bad file descriptor 00:31:28.493 [2024-07-15 03:34:34.567904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.493 [2024-07-15 03:34:34.567920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.493 [2024-07-15 03:34:34.567933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.493 [2024-07-15 03:34:34.567965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:28.493 [2024-07-15 03:34:34.570947] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:28.493 [2024-07-15 03:34:34.570976] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.493 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 
-- # xtrace_disable 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.754 03:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.133 [2024-07-15 03:34:35.864740] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:30.133 [2024-07-15 03:34:35.864777] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:30.133 [2024-07-15 03:34:35.864805] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:30.133 [2024-07-15 03:34:35.993261] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:30.133 [2024-07-15 03:34:36.098579] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:30.133 [2024-07-15 03:34:36.098634] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:30.133 request: 00:31:30.133 { 00:31:30.133 "name": "nvme", 00:31:30.133 "trtype": "tcp", 00:31:30.133 "traddr": "10.0.0.2", 00:31:30.133 "adrfam": "ipv4", 00:31:30.133 "trsvcid": "8009", 00:31:30.133 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:30.133 "wait_for_attach": true, 00:31:30.133 "method": "bdev_nvme_start_discovery", 00:31:30.133 "req_id": 1 00:31:30.133 } 00:31:30.133 Got JSON-RPC error response 00:31:30.133 response: 00:31:30.133 { 00:31:30.133 "code": -17, 00:31:30.133 "message": "File exists" 00:31:30.133 } 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.133 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.133 request: 00:31:30.133 { 00:31:30.133 "name": "nvme_second", 00:31:30.133 "trtype": "tcp", 00:31:30.133 "traddr": "10.0.0.2", 00:31:30.133 "adrfam": "ipv4", 00:31:30.133 "trsvcid": "8009", 00:31:30.133 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:30.133 "wait_for_attach": true, 00:31:30.133 "method": "bdev_nvme_start_discovery", 00:31:30.133 "req_id": 1 00:31:30.133 } 00:31:30.133 Got JSON-RPC error response 00:31:30.133 response: 00:31:30.134 { 00:31:30.134 "code": -17, 00:31:30.134 "message": "File exists" 00:31:30.134 } 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.134 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:30.392 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.392 03:34:36 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:30.392 03:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:30.392 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:30.392 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:30.392 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:30.392 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:30.392 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:30.392 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:30.392 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:30.392 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.392 03:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.335 [2024-07-15 03:34:37.306008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.335 [2024-07-15 03:34:37.306060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa491e0 with addr=10.0.0.2, port=8010 00:31:31.335 [2024-07-15 03:34:37.306085] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:31.335 [2024-07-15 03:34:37.306099] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:31.335 [2024-07-15 03:34:37.306112] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:32.271 [2024-07-15 03:34:38.308456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.271 [2024-07-15 03:34:38.308499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa87b00 with addr=10.0.0.2, port=8010 00:31:32.271 [2024-07-15 03:34:38.308522] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:32.271 [2024-07-15 03:34:38.308536] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:32.272 [2024-07-15 03:34:38.308548] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:33.210 [2024-07-15 03:34:39.310709] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:33.210 request: 00:31:33.210 { 00:31:33.210 "name": "nvme_second", 00:31:33.210 "trtype": "tcp", 00:31:33.210 "traddr": "10.0.0.2", 00:31:33.210 "adrfam": "ipv4", 00:31:33.210 "trsvcid": "8010", 00:31:33.210 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:33.210 "wait_for_attach": false, 00:31:33.210 "attach_timeout_ms": 3000, 00:31:33.210 "method": "bdev_nvme_start_discovery", 00:31:33.210 "req_id": 1 00:31:33.210 } 00:31:33.210 Got JSON-RPC error response 00:31:33.210 response: 00:31:33.210 { 00:31:33.210 "code": -110, 
00:31:33.210 "message": "Connection timed out" 00:31:33.210 } 00:31:33.210 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:33.210 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:33.210 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:33.210 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:33.210 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:33.210 03:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:33.210 03:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:33.211 03:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:33.211 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.211 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.211 03:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:33.211 03:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:33.211 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3314139 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:33.471 rmmod nvme_tcp 00:31:33.471 rmmod nvme_fabrics 00:31:33.471 rmmod nvme_keyring 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3314004 ']' 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3314004 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3314004 ']' 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3314004 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3314004 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3314004' 00:31:33.471 killing process with pid 3314004 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3314004 00:31:33.471 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3314004 00:31:33.732 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:33.732 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:33.732 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:33.732 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:33.732 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:33.732 03:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.732 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:33.732 03:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.635 03:34:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:35.635 00:31:35.635 real 0m13.320s 00:31:35.635 user 0m19.342s 00:31:35.635 sys 0m2.872s 00:31:35.635 03:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:35.635 03:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.635 ************************************ 00:31:35.635 END TEST nvmf_host_discovery 00:31:35.635 ************************************ 00:31:35.635 03:34:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:35.635 03:34:41 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:35.635 03:34:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:35.635 03:34:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:35.635 03:34:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:35.893 ************************************ 00:31:35.893 START TEST nvmf_host_multipath_status 00:31:35.893 ************************************ 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:35.893 * Looking for test storage... 
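The END TEST / START TEST banners above come from the run_test wrapper in common/autotest_common.sh, which frames each suite and hands the suite's exit status back to the caller. A rough sketch of that wrapper, inferred from the banners and the '[' 3 -le 1 ']' argument-count guard in the trace (timing and xtrace plumbing omitted):

  run_test() {
      local test_name=$1
      shift
      if [ $# -le 0 ]; then    # counterpart of the traced argument-count guard
          echo "usage: run_test <name> <command> [args...]" >&2
          return 1
      fi
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }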
00:31:35.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.893 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:35.894 03:34:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:35.894 03:34:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.848 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:37.849 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:37.849 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
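The loop that follows (nvmf/common.sh@382-@400) resolves each detected PCI function to its kernel net device by globbing sysfs and then stripping the directory prefix. A standalone sketch of that lookup, using one of the addresses reported above as an example input:

  pci=0000:0a:00.0                                    # example address from this log
  pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )  # @383: sysfs glob
  if [[ -e ${pci_net_devs[0]} ]]; then                # did the glob match anything real?
      pci_net_devs=( "${pci_net_devs[@]##*/}" )       # @399: keep only the device names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  else
      echo "no net devices under $pci" >&2
  fi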
00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:37.849 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:37.849 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:37.849 03:34:43 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:37.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:37.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:31:37.849 00:31:37.849 --- 10.0.0.2 ping statistics --- 00:31:37.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.849 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:37.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:31:37.849 00:31:37.849 --- 10.0.0.1 ping statistics --- 00:31:37.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.849 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3317170 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3317170 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3317170 ']' 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:37.849 03:34:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:38.107 [2024-07-15 03:34:44.017557] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
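The nvmf_tcp_init trace above (nvmf/common.sh@229-@268) builds the test topology by moving the target-side port into a private network namespace, so target (10.0.0.2) and initiator (10.0.0.1) exchange real TCP traffic on one host. The same commands, gathered from the trace into a single runnable block; the interface names are this machine's cvl_0_0/cvl_0_1 and would differ elsewhere:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                                             # @248
  ip link set cvl_0_0 netns "$NS"                                # @251: target port into the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # @254: initiator side, root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0        # @255
  ip link set cvl_0_1 up                                         # @258
  ip netns exec "$NS" ip link set cvl_0_0 up                     # @260
  ip netns exec "$NS" ip link set lo up                          # @261
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # @264: open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # @267: root ns -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                         # @268: target ns -> initiator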
00:31:38.107 [2024-07-15 03:34:44.017635] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.107 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.107 [2024-07-15 03:34:44.096566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:38.107 [2024-07-15 03:34:44.196872] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:38.107 [2024-07-15 03:34:44.196963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:38.107 [2024-07-15 03:34:44.196988] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:38.107 [2024-07-15 03:34:44.197011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:38.107 [2024-07-15 03:34:44.197030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:38.107 [2024-07-15 03:34:44.197097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.107 [2024-07-15 03:34:44.197106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.365 03:34:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:38.365 03:34:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:38.365 03:34:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:38.365 03:34:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:38.365 03:34:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:38.365 03:34:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:38.365 03:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3317170 00:31:38.365 03:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:38.622 [2024-07-15 03:34:44.602557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:38.622 03:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:38.879 Malloc0 00:31:38.879 03:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:39.136 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:39.392 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:39.650 [2024-07-15 03:34:45.611580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.650 03:34:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:39.907 [2024-07-15 03:34:45.868278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:39.907 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3317452 00:31:39.907 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:39.907 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3317452 /var/tmp/bdevperf.sock 00:31:39.907 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:39.907 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3317452 ']' 00:31:39.907 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:39.907 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:39.907 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:39.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:39.907 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:39.907 03:34:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:40.165 03:34:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:40.165 03:34:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:40.165 03:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:40.423 03:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:40.990 Nvme0n1 00:31:40.990 03:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:41.249 Nvme0n1 00:31:41.249 03:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:41.249 03:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:43.786 03:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:43.786 03:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:43.786 03:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:43.786 03:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:44.723 03:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:44.723 03:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:44.723 03:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.723 03:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:44.981 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.981 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:44.981 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.981 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:45.239 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:45.239 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:45.239 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.239 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:45.497 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.497 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:45.497 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.497 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:45.754 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.754 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:45.754 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.754 03:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:46.012 03:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.012 03:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:46.012 03:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.012 03:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:46.269 03:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.269 03:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:46.269 03:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:46.528 03:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:46.786 03:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:48.160 03:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:48.160 03:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:48.160 03:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.160 03:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:48.160 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:48.160 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:48.160 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.161 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:48.418 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.418 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:48.418 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.418 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:48.676 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.676 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:48.676 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.676 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:48.934 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.934 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:48.934 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.934 03:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:49.191 03:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.191 03:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:49.191 03:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.191 03:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:49.449 03:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.449 03:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:49.449 03:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:49.706 03:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:49.966 03:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:50.904 03:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:50.904 03:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:50.904 03:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.904 03:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:51.163 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.163 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:51.163 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.163 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:51.421 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:51.421 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:51.421 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.421 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:51.680 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.680 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:51.680 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.680 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:51.937 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.937 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:51.937 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.937 03:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:52.194 03:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.194 03:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:52.194 03:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.194 03:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:52.459 03:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.459 03:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:52.459 03:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:52.718 03:34:58 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:52.977 03:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:53.916 03:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:53.916 03:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:53.916 03:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.916 03:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:54.174 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.174 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:54.174 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.174 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:54.466 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:54.466 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:54.466 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.466 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:54.749 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.749 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:54.749 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.749 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:55.007 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.007 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:55.007 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.007 03:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:55.264 03:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:31:55.264 03:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:55.264 03:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.264 03:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:55.521 03:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:55.521 03:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:55.521 03:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:55.779 03:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:56.037 03:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:56.973 03:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:56.973 03:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:56.973 03:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.973 03:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:57.233 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:57.233 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:57.233 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.233 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:57.491 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:57.491 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:57.491 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.491 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:57.747 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.747 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:31:57.748 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.748 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:58.005 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.005 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:58.005 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.005 03:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:58.262 03:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:58.262 03:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:58.262 03:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.262 03:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:58.519 03:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:58.519 03:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:58.519 03:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:58.776 03:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:59.035 03:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:59.971 03:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:59.971 03:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:59.971 03:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.971 03:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:00.229 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:00.229 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:00.229 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.229 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:00.487 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.487 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:00.487 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.487 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:00.745 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.745 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:00.745 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.745 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:01.002 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.003 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:01.003 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.003 03:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:01.260 03:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:01.260 03:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:01.260 03:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.260 03:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:01.518 03:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.518 03:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:01.776 03:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:01.776 03:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:32:02.034 03:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:02.291 03:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:03.226 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:03.226 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:03.226 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.226 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:03.483 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.483 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:03.483 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.483 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:03.740 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.740 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:03.740 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.741 03:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:03.998 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.998 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:03.998 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.998 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:04.255 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.255 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:04.255 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.255 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:04.514 03:35:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.514 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:04.514 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.514 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:04.771 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.771 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:04.771 03:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:05.028 03:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:05.284 03:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:06.216 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:06.216 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:06.216 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.216 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:06.473 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:06.473 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:06.473 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.473 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:06.730 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.730 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:06.730 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.730 03:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:06.987 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.987 03:35:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:06.987 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.987 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:07.242 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.242 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:07.242 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.242 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:07.498 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.498 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:07.498 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.498 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:07.755 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.755 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:07.755 03:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:08.011 03:35:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:08.270 03:35:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:09.205 03:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:09.205 03:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:09.205 03:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.205 03:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:09.462 03:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.462 03:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:09.462 03:35:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.462 03:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:09.720 03:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.720 03:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:09.720 03:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.720 03:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:09.978 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.978 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:09.978 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.978 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:10.302 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.302 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:10.302 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.302 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:10.577 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.578 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:10.578 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.578 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:10.835 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.835 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:10.835 03:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:11.092 03:35:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:11.351 03:35:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:12.287 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:12.287 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:12.287 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.287 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:12.545 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.545 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:12.545 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.545 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:12.803 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:12.803 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:12.803 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.803 03:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:13.061 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.061 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:13.061 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.061 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:13.320 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.320 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:13.320 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.320 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:13.579 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.579 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:13.579 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.579 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3317452 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3317452 ']' 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3317452 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3317452 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3317452' 00:32:13.838 killing process with pid 3317452 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3317452 00:32:13.838 03:35:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3317452 00:32:14.099 Connection closed with partial response: 00:32:14.099 00:32:14.099 00:32:14.099 03:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3317452 00:32:14.099 03:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:14.099 [2024-07-15 03:34:45.932033] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:14.099 [2024-07-15 03:34:45.932117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317452 ] 00:32:14.099 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.099 [2024-07-15 03:34:45.991269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.099 [2024-07-15 03:34:46.075869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:14.099 Running I/O for 90 seconds... 
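Every check_status round in the trace above follows the same set-then-verify pattern: flip a listener's ANA state on the target side, sleep one second, then ask bdevperf for its io_paths view and pick out one flag with jq. A condensed sketch of that pattern, built only from commands that appear verbatim in this log (the set_and_check wrapper name itself is hypothetical):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
set_and_check() {  # $1=port  $2=new ANA state  $3=expected "current" value
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s "$1" -n "$2"
    sleep 1  # give the host time to consume the ANA change event, as the script does
    got=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | \
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").current")
    [[ "$got" == "$3" ]]
}
set_and_check 4421 inaccessible false  # e.g. an inaccessible path must stop being current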
00:32:14.099 [2024-07-15 03:35:01.701073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:14.099 [2024-07-15 03:35:01.701145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:32:14.099 [... 03:35:01.701 to 03:35:01.707: well over a hundred further nvme_qpair.c NOTICE pairs elided; WRITE (lba 50592 to 51544) and READ (lba 50536 to 50584) commands on sqid:1 nsid:1 len:8 each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
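Note: these bursts of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions (one above, a second below about sixteen seconds later) are the multipath status test behaving as designed: the target flips the ANA state of the active listener to "inaccessible", and every queued READ/WRITE on qid:1 is completed back to the host with that status until a usable path returns. A minimal sketch of how such a flip is typically driven with SPDK's nvmf_subsystem_listener_set_ana_state RPC; the subsystem NQN is taken from this run, but the listener address and port are placeholders, since they are not visible in this excerpt:

    # Hedged sketch, not the literal commands from this job:
    # mark the listener inaccessible, wait, then restore it to optimized.
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    sleep 15
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n optimized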
sqhd:004e p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.307838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.307922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.307989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.308010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.308035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.308052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.308814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.308839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.308895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.308915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.308949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.308968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.308990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 
03:35:17.309558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.309976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.309998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.310015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.310037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.310054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.310075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.310092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.310114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.310131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:14.103 [2024-07-15 03:35:17.310153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36728 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.103 [2024-07-15 03:35:17.310174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.310966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.310987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:14.104 [2024-07-15 03:35:17.311003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.311025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.311041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.311064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.311080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.311102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.311118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:14.104 [2024-07-15 03:35:17.311139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.104 [2024-07-15 03:35:17.311156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:14.104 Received shutdown signal, test time was about 32.397229 seconds 00:32:14.104 00:32:14.104 Latency(us) 00:32:14.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.104 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:14.104 Verification LBA range: start 0x0 length 0x4000 00:32:14.104 
00:32:14.104
00:32:14.104 Latency(us)
00:32:14.104 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average       min         max
00:32:14.104 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:14.104 Verification LBA range: start 0x0 length 0x4000
00:32:14.104 Nvme0n1                     :      32.40  7898.20    30.85     0.00    0.00   16179.48    213.90  4026531.84
00:32:14.104 ===================================================================================================================
00:32:14.104 Total                       :             7898.20    30.85     0.00    0.00   16179.48    213.90  4026531.84
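The summary row is internally consistent: at an I/O size of 4096 bytes, MiB/s = IOPS x 4096 / 1048576, so 7898.20 IOPS works out to the reported 30.85 MiB/s, and 7898.20 IOPS over the 32.40 s runtime is roughly 256k verified I/Os. A one-liner to reproduce the check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 7898.20 * 4096 / 1048576 }'   # prints 30.85 MiB/s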
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:14.623 03:35:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:14.623 03:35:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:14.623 03:35:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.623 03:35:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.623 03:35:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.164 03:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:17.164 00:32:17.164 real 0m40.978s 00:32:17.164 user 2m3.911s 00:32:17.164 sys 0m10.319s 00:32:17.164 03:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:17.164 03:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:17.164 ************************************ 00:32:17.164 END TEST nvmf_host_multipath_status 00:32:17.164 ************************************ 00:32:17.164 03:35:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:17.164 03:35:22 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:17.164 03:35:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:17.164 03:35:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:17.164 03:35:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:17.164 ************************************ 00:32:17.164 START TEST nvmf_discovery_remove_ifc 00:32:17.164 ************************************ 00:32:17.164 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:17.165 * Looking for test storage... 
00:32:17.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:17.165 03:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:19.066 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:19.066 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:19.066 03:35:24 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:19.066 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:19.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:19.066 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:19.067 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:19.067 03:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:19.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:19.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:32:19.067 00:32:19.067 --- 10.0.0.2 ping statistics --- 00:32:19.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.067 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:19.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:19.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:32:19.067 00:32:19.067 --- 10.0.0.1 ping statistics --- 00:32:19.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.067 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3323522 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3323522 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3323522 ']' 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:19.067 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.067 [2024-07-15 03:35:25.082402] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
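For readers following the trace: the nvmf_tcp_init sequence above builds the test topology by moving the target-side port into a network namespace, so a single host can play both NVMe/TCP target and initiator. Condensed into plain shell (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are what this particular host reported; other machines will differ):

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns

Every later target-side command is then run through NVMF_TARGET_NS_CMD, i.e. prefixed with 'ip netns exec cvl_0_0_ns_spdk', which is why the nvmf_tgt started below listens on 10.0.0.2 while the host-side tools connect from 10.0.0.1.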
00:32:19.067 [2024-07-15 03:35:25.082473] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:19.067 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.067 [2024-07-15 03:35:25.145533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.325 [2024-07-15 03:35:25.237200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:19.325 [2024-07-15 03:35:25.237257] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:19.325 [2024-07-15 03:35:25.237271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:19.325 [2024-07-15 03:35:25.237282] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:19.325 [2024-07-15 03:35:25.237302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:19.325 [2024-07-15 03:35:25.237328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.325 [2024-07-15 03:35:25.382311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.325 [2024-07-15 03:35:25.390473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:19.325 null0 00:32:19.325 [2024-07-15 03:35:25.422418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3323661 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:19.325 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3323661 /tmp/host.sock 00:32:19.326 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3323661 ']' 00:32:19.326 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:19.326 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:32:19.326 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:19.326 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:19.326 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:19.326 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.584 [2024-07-15 03:35:25.486697] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:19.584 [2024-07-15 03:35:25.486786] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323661 ] 00:32:19.584 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.584 [2024-07-15 03:35:25.548240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.584 [2024-07-15 03:35:25.638445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.584 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:19.584 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:19.584 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:19.584 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:19.584 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.584 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.584 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.584 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:19.584 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.584 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.842 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.842 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:19.842 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.842 03:35:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:20.776 [2024-07-15 03:35:26.840029] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:20.776 [2024-07-15 03:35:26.840055] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:20.776 [2024-07-15 03:35:26.840079] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:21.034 [2024-07-15 03:35:26.926370] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:21.034 [2024-07-15 03:35:27.033147] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:21.034 [2024-07-15 03:35:27.033235] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:21.034 [2024-07-15 03:35:27.033278] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:21.034 [2024-07-15 03:35:27.033306] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:21.034 [2024-07-15 03:35:27.033333] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:21.034 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.034 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:21.034 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:21.034 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:21.034 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.034 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:21.034 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.034 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:21.034 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:21.034 [2024-07-15 03:35:27.038242] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15e6300 was disconnected and freed. delete nvme_qpair. 
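The discovery session that just attached was opened at discovery_remove_ifc.sh@69 above; rpc_cmd is a thin wrapper around scripts/rpc.py, so the equivalent standalone invocation against the host application's socket is:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

The deliberately short loss/reconnect/fast-io-fail timeouts are what let the test observe controller failure within a couple of seconds once the target interface is pulled further down in the log.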
00:32:21.034 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.034 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:21.035 03:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:22.409 03:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:22.409 03:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.409 03:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:22.409 03:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.409 03:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:22.409 03:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:22.409 03:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:22.409 03:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.409 03:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:22.409 03:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:23.344 03:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:23.344 03:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.344 03:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.344 03:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:23.344 03:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.344 03:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:23.344 03:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:23.344 03:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.344 03:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:23.344 03:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:24.281 03:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:24.281 03:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.281 03:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:24.281 03:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.281 03:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.281 03:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:24.281 03:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:24.281 03:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.281 03:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:24.281 03:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:25.215 03:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.215 03:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.215 03:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.215 03:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.215 03:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.215 03:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:25.215 03:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.215 03:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.215 03:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:25.215 03:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:26.589 03:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:26.589 03:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.589 03:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.589 03:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:26.589 03:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:26.589 03:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:26.589 03:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:26.589 03:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
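The repeating bdev_get_bdevs/sleep pattern above is the script's polling helper pair. Reconstructed from the xtrace (the canonical definitions live in test/nvmf/host/discovery_remove_ifc.sh and may differ in detail), it behaves roughly like:

get_bdev_list() {
    # emit all bdev names as one sorted, space-separated string
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once per second until the bdev list equals the expected value
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

So wait_for_bdev nvme0n1 returns as soon as discovery has produced the namespace, and wait_for_bdev '' returns once it has been torn down again.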
00:32:26.589 03:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:26.589 03:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:26.589 [2024-07-15 03:35:32.474232] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:26.589 [2024-07-15 03:35:32.474303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.589 [2024-07-15 03:35:32.474327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.589 [2024-07-15 03:35:32.474348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.589 [2024-07-15 03:35:32.474364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.589 [2024-07-15 03:35:32.474380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.589 [2024-07-15 03:35:32.474395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.589 [2024-07-15 03:35:32.474411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.589 [2024-07-15 03:35:32.474434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.589 [2024-07-15 03:35:32.474450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:26.589 [2024-07-15 03:35:32.474464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:26.589 [2024-07-15 03:35:32.474479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15acce0 is same with the state(5) to be set 00:32:26.589 [2024-07-15 03:35:32.484250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15acce0 (9): Bad file descriptor 00:32:26.589 [2024-07-15 03:35:32.494299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:27.524 03:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.524 03:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.524 03:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.524 03:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.524 03:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.524 03:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.524 03:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.524 [2024-07-15 03:35:33.522911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:27.524 [2024-07-15 
03:35:33.522970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15acce0 with addr=10.0.0.2, port=4420 00:32:27.524 [2024-07-15 03:35:33.522997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15acce0 is same with the state(5) to be set 00:32:27.524 [2024-07-15 03:35:33.523041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15acce0 (9): Bad file descriptor 00:32:27.524 [2024-07-15 03:35:33.523501] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:27.524 [2024-07-15 03:35:33.523537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:27.524 [2024-07-15 03:35:33.523555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:27.524 [2024-07-15 03:35:33.523574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:27.524 [2024-07-15 03:35:33.523603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.524 [2024-07-15 03:35:33.523622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:27.524 03:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.524 03:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:27.524 03:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:28.457 [2024-07-15 03:35:34.526115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:28.457 [2024-07-15 03:35:34.526141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:28.457 [2024-07-15 03:35:34.526170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:28.457 [2024-07-15 03:35:34.526182] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:28.457 [2024-07-15 03:35:34.526201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:28.457 [2024-07-15 03:35:34.526263] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:28.457 [2024-07-15 03:35:34.526312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:28.457 [2024-07-15 03:35:34.526338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.457 [2024-07-15 03:35:34.526360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:28.457 [2024-07-15 03:35:34.526375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.457 [2024-07-15 03:35:34.526391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:28.457 [2024-07-15 03:35:34.526405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.457 [2024-07-15 03:35:34.526421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:28.457 [2024-07-15 03:35:34.526435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.457 [2024-07-15 03:35:34.526451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:28.457 [2024-07-15 03:35:34.526465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:28.457 [2024-07-15 03:35:34.526479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:32:28.457 [2024-07-15 03:35:34.526634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ac160 (9): Bad file descriptor 00:32:28.457 [2024-07-15 03:35:34.527659] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:28.457 [2024-07-15 03:35:34.527684] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:28.457 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:28.457 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.457 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:28.457 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.457 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:28.457 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:28.457 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:28.457 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.457 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:28.457 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:28.457 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.730 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:28.730 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:28.730 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.730 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:28.730 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.730 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:28.730 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:28.730 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:28.730 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.730 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:28.730 03:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:29.680 03:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:29.680 03:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.681 03:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:29.681 03:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.681 03:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:29.681 03:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:29.681 03:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:29.681 03:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.681 03:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:29.681 03:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:30.614 [2024-07-15 03:35:36.586052] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:30.614 [2024-07-15 03:35:36.586076] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:30.614 [2024-07-15 03:35:36.586099] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:30.614 [2024-07-15 03:35:36.674437] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:30.614 03:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:30.614 03:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.614 03:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:30.614 03:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.614 03:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.614 03:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:30.614 03:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:30.614 03:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.614 03:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:30.614 03:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:30.871 [2024-07-15 03:35:36.859814] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:30.872 [2024-07-15 03:35:36.859874] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:30.872 [2024-07-15 03:35:36.859934] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:30.872 [2024-07-15 03:35:36.859958] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:30.872 [2024-07-15 03:35:36.859971] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:30.872 [2024-07-15 03:35:36.864618] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x159b920 was disconnected and freed. delete nvme_qpair. 
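To recap the fault-injection cycle the log has just walked through (steps @75/@76 and @82/@83 of discovery_remove_ifc.sh, using this host's device names):

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # @75: take the target address away
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # @76: drop the link; the errno-110 reconnect failures follow
# after the 2 s ctrlr-loss timeout the discovery entry is removed and
# wait_for_bdev '' sees the bdev list drain
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @82: restore the address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # @83: bring the link back
# discovery re-attaches and the namespace reappears as nvme1n1 (qpair 0x159b920 above)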
00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3323661 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3323661 ']' 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3323661 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3323661 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3323661' 00:32:31.805 killing process with pid 3323661 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3323661 00:32:31.805 03:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3323661 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:32.064 rmmod nvme_tcp 00:32:32.064 rmmod nvme_fabrics 00:32:32.064 rmmod nvme_keyring 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
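killprocess, already used above for the host application (pid 3323661) and invoked next for the nvmf target (pid 3323522), reconstructs from the trace to approximately the following; the canonical definition is in test/common/autotest_common.sh and includes details (signal choice, sudo-wrapper handling) the trace does not show:

killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" || return 0                    # nothing running under that pid
    if [[ "$(uname)" == "Linux" ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # the real helper special-cases processes launched via sudo here;
    # that branch is not taken above (reactor_0/reactor_1 != sudo)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}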
00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3323522 ']' 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3323522 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3323522 ']' 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3323522 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3323522 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3323522' 00:32:32.064 killing process with pid 3323522 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3323522 00:32:32.064 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3323522 00:32:32.323 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:32.323 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:32.323 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:32.323 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:32.323 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:32.323 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.323 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:32.323 03:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.867 03:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:34.867 00:32:34.867 real 0m17.566s 00:32:34.867 user 0m25.388s 00:32:34.867 sys 0m3.030s 00:32:34.867 03:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:34.867 03:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:34.867 ************************************ 00:32:34.867 END TEST nvmf_discovery_remove_ifc 00:32:34.867 ************************************ 00:32:34.867 03:35:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:34.867 03:35:40 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:34.867 03:35:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:34.867 03:35:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.867 03:35:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.867 ************************************ 00:32:34.867 START TEST nvmf_identify_kernel_target 00:32:34.867 ************************************ 
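
run_test is the wrapper that produces the START/END banners and the real/user/sys timing block above. Roughly, as a sketch rather than the verbatim helper ($rootdir stands in for the /var/jenkins/workspace/.../spdk path from the log):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"             # the time keyword emits the real/user/sys summary
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_identify_kernel_target \
        "$rootdir/test/nvmf/host/identify_kernel_nvmf.sh" --transport=tcp
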
00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:34.867 * Looking for test storage... 00:32:34.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:34.867 03:35:40 
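
Every initiator-side command in this test reuses the host identity generated above. The pattern, sketched; deriving NVME_HOSTID with a suffix strip is one plausible spelling, since the suite simply carries the uuid portion of the generated NQN forward:

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Consumed later when probing the kernel target:
    nvme discover "${NVME_HOST[@]}" -a 10.0.0.1 -t tcp -s 4420
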
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:34.867 03:35:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:36.769 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:36.769 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:36.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:36.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.769 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:36.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:32:36.770 00:32:36.770 --- 10.0.0.2 ping statistics --- 00:32:36.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.770 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:36.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:32:36.770 00:32:36.770 --- 10.0.0.1 ping statistics --- 00:32:36.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.770 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:36.770 03:35:42 
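
nvmf_tcp_init, traced above, turns the two ice ports into a point-to-point NVMe/TCP rig: cvl_0_0 moves into a private namespace as the target-side interface, cvl_0_1 stays in the root namespace as the initiator, and a ping in each direction proves the link before any NVMe traffic. Condensed from the trace, names and addresses as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port

    ip addr add 10.0.0.1/24 dev cvl_0_1                  # root-namespace side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns

For this kernel-target test the kernel listener then binds 10.0.0.1 on the root-namespace side, which is why get_main_ns_ip above resolves to NVMF_INITIATOR_IP (10.0.0.1) rather than the namespaced address.
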
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:36.770 03:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:37.701 Waiting for block devices as requested 00:32:37.701 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:37.701 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:37.701 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:37.958 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:37.958 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:37.958 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:37.958 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:38.215 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:38.215 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:38.215 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:38.215 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:38.471 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:38.471 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:38.471 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:38.728 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:38.728 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:38.728 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:38.985 No valid GPT data, bailing 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:38.985 03:35:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:38.985 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:38.985 00:32:38.985 Discovery Log Number of Records 2, Generation counter 2 00:32:38.985 =====Discovery Log Entry 0====== 00:32:38.985 trtype: tcp 00:32:38.985 adrfam: ipv4 00:32:38.985 subtype: current discovery subsystem 00:32:38.985 treq: not specified, sq flow control disable supported 00:32:38.985 portid: 1 00:32:38.985 trsvcid: 4420 00:32:38.985 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:38.985 traddr: 10.0.0.1 00:32:38.985 eflags: none 00:32:38.985 sectype: none 00:32:38.985 =====Discovery Log Entry 1====== 00:32:38.985 trtype: tcp 00:32:38.985 adrfam: ipv4 00:32:38.985 subtype: nvme subsystem 00:32:38.985 treq: not specified, sq flow control disable supported 00:32:38.985 portid: 1 00:32:38.985 trsvcid: 4420 00:32:38.985 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:38.985 traddr: 10.0.0.1 00:32:38.985 eflags: none 00:32:38.985 sectype: none 00:32:38.985 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:38.985 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:38.985 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.244 ===================================================== 00:32:39.244 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:39.244 ===================================================== 00:32:39.244 Controller Capabilities/Features 00:32:39.244 ================================ 00:32:39.244 Vendor ID: 0000 00:32:39.244 Subsystem Vendor ID: 0000 00:32:39.244 Serial Number: 637f105093f0bc41b9e1 00:32:39.244 Model Number: Linux 00:32:39.244 Firmware Version: 6.7.0-68 00:32:39.244 Recommended Arb Burst: 0 00:32:39.244 IEEE OUI Identifier: 00 00 00 00:32:39.244 Multi-path I/O 00:32:39.244 May have multiple subsystem ports: No 00:32:39.244 May have multiple 
controllers: No 00:32:39.244 Associated with SR-IOV VF: No 00:32:39.244 Max Data Transfer Size: Unlimited 00:32:39.244 Max Number of Namespaces: 0 00:32:39.244 Max Number of I/O Queues: 1024 00:32:39.244 NVMe Specification Version (VS): 1.3 00:32:39.244 NVMe Specification Version (Identify): 1.3 00:32:39.244 Maximum Queue Entries: 1024 00:32:39.245 Contiguous Queues Required: No 00:32:39.245 Arbitration Mechanisms Supported 00:32:39.245 Weighted Round Robin: Not Supported 00:32:39.245 Vendor Specific: Not Supported 00:32:39.245 Reset Timeout: 7500 ms 00:32:39.245 Doorbell Stride: 4 bytes 00:32:39.245 NVM Subsystem Reset: Not Supported 00:32:39.245 Command Sets Supported 00:32:39.245 NVM Command Set: Supported 00:32:39.245 Boot Partition: Not Supported 00:32:39.245 Memory Page Size Minimum: 4096 bytes 00:32:39.245 Memory Page Size Maximum: 4096 bytes 00:32:39.245 Persistent Memory Region: Not Supported 00:32:39.245 Optional Asynchronous Events Supported 00:32:39.245 Namespace Attribute Notices: Not Supported 00:32:39.245 Firmware Activation Notices: Not Supported 00:32:39.245 ANA Change Notices: Not Supported 00:32:39.245 PLE Aggregate Log Change Notices: Not Supported 00:32:39.245 LBA Status Info Alert Notices: Not Supported 00:32:39.245 EGE Aggregate Log Change Notices: Not Supported 00:32:39.245 Normal NVM Subsystem Shutdown event: Not Supported 00:32:39.245 Zone Descriptor Change Notices: Not Supported 00:32:39.245 Discovery Log Change Notices: Supported 00:32:39.245 Controller Attributes 00:32:39.245 128-bit Host Identifier: Not Supported 00:32:39.245 Non-Operational Permissive Mode: Not Supported 00:32:39.245 NVM Sets: Not Supported 00:32:39.245 Read Recovery Levels: Not Supported 00:32:39.245 Endurance Groups: Not Supported 00:32:39.245 Predictable Latency Mode: Not Supported 00:32:39.245 Traffic Based Keep ALive: Not Supported 00:32:39.245 Namespace Granularity: Not Supported 00:32:39.245 SQ Associations: Not Supported 00:32:39.245 UUID List: Not Supported 00:32:39.245 Multi-Domain Subsystem: Not Supported 00:32:39.245 Fixed Capacity Management: Not Supported 00:32:39.245 Variable Capacity Management: Not Supported 00:32:39.245 Delete Endurance Group: Not Supported 00:32:39.245 Delete NVM Set: Not Supported 00:32:39.245 Extended LBA Formats Supported: Not Supported 00:32:39.245 Flexible Data Placement Supported: Not Supported 00:32:39.245 00:32:39.245 Controller Memory Buffer Support 00:32:39.245 ================================ 00:32:39.245 Supported: No 00:32:39.245 00:32:39.245 Persistent Memory Region Support 00:32:39.245 ================================ 00:32:39.245 Supported: No 00:32:39.245 00:32:39.245 Admin Command Set Attributes 00:32:39.245 ============================ 00:32:39.245 Security Send/Receive: Not Supported 00:32:39.245 Format NVM: Not Supported 00:32:39.245 Firmware Activate/Download: Not Supported 00:32:39.245 Namespace Management: Not Supported 00:32:39.245 Device Self-Test: Not Supported 00:32:39.245 Directives: Not Supported 00:32:39.245 NVMe-MI: Not Supported 00:32:39.245 Virtualization Management: Not Supported 00:32:39.245 Doorbell Buffer Config: Not Supported 00:32:39.245 Get LBA Status Capability: Not Supported 00:32:39.245 Command & Feature Lockdown Capability: Not Supported 00:32:39.245 Abort Command Limit: 1 00:32:39.245 Async Event Request Limit: 1 00:32:39.245 Number of Firmware Slots: N/A 00:32:39.245 Firmware Slot 1 Read-Only: N/A 00:32:39.245 Firmware Activation Without Reset: N/A 00:32:39.245 Multiple Update Detection Support: N/A 
00:32:39.245 Firmware Update Granularity: No Information Provided 00:32:39.245 Per-Namespace SMART Log: No 00:32:39.245 Asymmetric Namespace Access Log Page: Not Supported 00:32:39.245 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:39.245 Command Effects Log Page: Not Supported 00:32:39.245 Get Log Page Extended Data: Supported 00:32:39.245 Telemetry Log Pages: Not Supported 00:32:39.245 Persistent Event Log Pages: Not Supported 00:32:39.245 Supported Log Pages Log Page: May Support 00:32:39.245 Commands Supported & Effects Log Page: Not Supported 00:32:39.245 Feature Identifiers & Effects Log Page:May Support 00:32:39.245 NVMe-MI Commands & Effects Log Page: May Support 00:32:39.245 Data Area 4 for Telemetry Log: Not Supported 00:32:39.245 Error Log Page Entries Supported: 1 00:32:39.245 Keep Alive: Not Supported 00:32:39.245 00:32:39.245 NVM Command Set Attributes 00:32:39.245 ========================== 00:32:39.245 Submission Queue Entry Size 00:32:39.245 Max: 1 00:32:39.245 Min: 1 00:32:39.245 Completion Queue Entry Size 00:32:39.245 Max: 1 00:32:39.245 Min: 1 00:32:39.245 Number of Namespaces: 0 00:32:39.245 Compare Command: Not Supported 00:32:39.245 Write Uncorrectable Command: Not Supported 00:32:39.245 Dataset Management Command: Not Supported 00:32:39.245 Write Zeroes Command: Not Supported 00:32:39.245 Set Features Save Field: Not Supported 00:32:39.245 Reservations: Not Supported 00:32:39.245 Timestamp: Not Supported 00:32:39.245 Copy: Not Supported 00:32:39.245 Volatile Write Cache: Not Present 00:32:39.245 Atomic Write Unit (Normal): 1 00:32:39.245 Atomic Write Unit (PFail): 1 00:32:39.245 Atomic Compare & Write Unit: 1 00:32:39.245 Fused Compare & Write: Not Supported 00:32:39.245 Scatter-Gather List 00:32:39.245 SGL Command Set: Supported 00:32:39.245 SGL Keyed: Not Supported 00:32:39.245 SGL Bit Bucket Descriptor: Not Supported 00:32:39.245 SGL Metadata Pointer: Not Supported 00:32:39.245 Oversized SGL: Not Supported 00:32:39.245 SGL Metadata Address: Not Supported 00:32:39.245 SGL Offset: Supported 00:32:39.245 Transport SGL Data Block: Not Supported 00:32:39.245 Replay Protected Memory Block: Not Supported 00:32:39.245 00:32:39.245 Firmware Slot Information 00:32:39.245 ========================= 00:32:39.245 Active slot: 0 00:32:39.245 00:32:39.245 00:32:39.245 Error Log 00:32:39.245 ========= 00:32:39.245 00:32:39.245 Active Namespaces 00:32:39.245 ================= 00:32:39.245 Discovery Log Page 00:32:39.245 ================== 00:32:39.245 Generation Counter: 2 00:32:39.245 Number of Records: 2 00:32:39.245 Record Format: 0 00:32:39.245 00:32:39.245 Discovery Log Entry 0 00:32:39.245 ---------------------- 00:32:39.245 Transport Type: 3 (TCP) 00:32:39.245 Address Family: 1 (IPv4) 00:32:39.245 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:39.245 Entry Flags: 00:32:39.245 Duplicate Returned Information: 0 00:32:39.245 Explicit Persistent Connection Support for Discovery: 0 00:32:39.245 Transport Requirements: 00:32:39.245 Secure Channel: Not Specified 00:32:39.245 Port ID: 1 (0x0001) 00:32:39.245 Controller ID: 65535 (0xffff) 00:32:39.245 Admin Max SQ Size: 32 00:32:39.245 Transport Service Identifier: 4420 00:32:39.245 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:39.245 Transport Address: 10.0.0.1 00:32:39.245 Discovery Log Entry 1 00:32:39.245 ---------------------- 00:32:39.245 Transport Type: 3 (TCP) 00:32:39.245 Address Family: 1 (IPv4) 00:32:39.245 Subsystem Type: 2 (NVM Subsystem) 00:32:39.245 Entry Flags: 
00:32:39.245 Duplicate Returned Information: 0 00:32:39.245 Explicit Persistent Connection Support for Discovery: 0 00:32:39.245 Transport Requirements: 00:32:39.245 Secure Channel: Not Specified 00:32:39.245 Port ID: 1 (0x0001) 00:32:39.245 Controller ID: 65535 (0xffff) 00:32:39.245 Admin Max SQ Size: 32 00:32:39.245 Transport Service Identifier: 4420 00:32:39.245 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:39.245 Transport Address: 10.0.0.1 00:32:39.245 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:39.245 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.245 get_feature(0x01) failed 00:32:39.245 get_feature(0x02) failed 00:32:39.245 get_feature(0x04) failed 00:32:39.245 ===================================================== 00:32:39.245 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:39.245 ===================================================== 00:32:39.245 Controller Capabilities/Features 00:32:39.245 ================================ 00:32:39.245 Vendor ID: 0000 00:32:39.245 Subsystem Vendor ID: 0000 00:32:39.245 Serial Number: a52c350a94120d8330ac 00:32:39.245 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:39.245 Firmware Version: 6.7.0-68 00:32:39.245 Recommended Arb Burst: 6 00:32:39.245 IEEE OUI Identifier: 00 00 00 00:32:39.245 Multi-path I/O 00:32:39.245 May have multiple subsystem ports: Yes 00:32:39.245 May have multiple controllers: Yes 00:32:39.245 Associated with SR-IOV VF: No 00:32:39.245 Max Data Transfer Size: Unlimited 00:32:39.246 Max Number of Namespaces: 1024 00:32:39.246 Max Number of I/O Queues: 128 00:32:39.246 NVMe Specification Version (VS): 1.3 00:32:39.246 NVMe Specification Version (Identify): 1.3 00:32:39.246 Maximum Queue Entries: 1024 00:32:39.246 Contiguous Queues Required: No 00:32:39.246 Arbitration Mechanisms Supported 00:32:39.246 Weighted Round Robin: Not Supported 00:32:39.246 Vendor Specific: Not Supported 00:32:39.246 Reset Timeout: 7500 ms 00:32:39.246 Doorbell Stride: 4 bytes 00:32:39.246 NVM Subsystem Reset: Not Supported 00:32:39.246 Command Sets Supported 00:32:39.246 NVM Command Set: Supported 00:32:39.246 Boot Partition: Not Supported 00:32:39.246 Memory Page Size Minimum: 4096 bytes 00:32:39.246 Memory Page Size Maximum: 4096 bytes 00:32:39.246 Persistent Memory Region: Not Supported 00:32:39.246 Optional Asynchronous Events Supported 00:32:39.246 Namespace Attribute Notices: Supported 00:32:39.246 Firmware Activation Notices: Not Supported 00:32:39.246 ANA Change Notices: Supported 00:32:39.246 PLE Aggregate Log Change Notices: Not Supported 00:32:39.246 LBA Status Info Alert Notices: Not Supported 00:32:39.246 EGE Aggregate Log Change Notices: Not Supported 00:32:39.246 Normal NVM Subsystem Shutdown event: Not Supported 00:32:39.246 Zone Descriptor Change Notices: Not Supported 00:32:39.246 Discovery Log Change Notices: Not Supported 00:32:39.246 Controller Attributes 00:32:39.246 128-bit Host Identifier: Supported 00:32:39.246 Non-Operational Permissive Mode: Not Supported 00:32:39.246 NVM Sets: Not Supported 00:32:39.246 Read Recovery Levels: Not Supported 00:32:39.246 Endurance Groups: Not Supported 00:32:39.246 Predictable Latency Mode: Not Supported 00:32:39.246 Traffic Based Keep ALive: Supported 00:32:39.246 Namespace Granularity: Not Supported 
00:32:39.246 SQ Associations: Not Supported 00:32:39.246 UUID List: Not Supported 00:32:39.246 Multi-Domain Subsystem: Not Supported 00:32:39.246 Fixed Capacity Management: Not Supported 00:32:39.246 Variable Capacity Management: Not Supported 00:32:39.246 Delete Endurance Group: Not Supported 00:32:39.246 Delete NVM Set: Not Supported 00:32:39.246 Extended LBA Formats Supported: Not Supported 00:32:39.246 Flexible Data Placement Supported: Not Supported 00:32:39.246 00:32:39.246 Controller Memory Buffer Support 00:32:39.246 ================================ 00:32:39.246 Supported: No 00:32:39.246 00:32:39.246 Persistent Memory Region Support 00:32:39.246 ================================ 00:32:39.246 Supported: No 00:32:39.246 00:32:39.246 Admin Command Set Attributes 00:32:39.246 ============================ 00:32:39.246 Security Send/Receive: Not Supported 00:32:39.246 Format NVM: Not Supported 00:32:39.246 Firmware Activate/Download: Not Supported 00:32:39.246 Namespace Management: Not Supported 00:32:39.246 Device Self-Test: Not Supported 00:32:39.246 Directives: Not Supported 00:32:39.246 NVMe-MI: Not Supported 00:32:39.246 Virtualization Management: Not Supported 00:32:39.246 Doorbell Buffer Config: Not Supported 00:32:39.246 Get LBA Status Capability: Not Supported 00:32:39.246 Command & Feature Lockdown Capability: Not Supported 00:32:39.246 Abort Command Limit: 4 00:32:39.246 Async Event Request Limit: 4 00:32:39.246 Number of Firmware Slots: N/A 00:32:39.246 Firmware Slot 1 Read-Only: N/A 00:32:39.246 Firmware Activation Without Reset: N/A 00:32:39.246 Multiple Update Detection Support: N/A 00:32:39.246 Firmware Update Granularity: No Information Provided 00:32:39.246 Per-Namespace SMART Log: Yes 00:32:39.246 Asymmetric Namespace Access Log Page: Supported 00:32:39.246 ANA Transition Time : 10 sec 00:32:39.246 00:32:39.246 Asymmetric Namespace Access Capabilities 00:32:39.246 ANA Optimized State : Supported 00:32:39.246 ANA Non-Optimized State : Supported 00:32:39.246 ANA Inaccessible State : Supported 00:32:39.246 ANA Persistent Loss State : Supported 00:32:39.246 ANA Change State : Supported 00:32:39.246 ANAGRPID is not changed : No 00:32:39.246 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:39.246 00:32:39.246 ANA Group Identifier Maximum : 128 00:32:39.246 Number of ANA Group Identifiers : 128 00:32:39.246 Max Number of Allowed Namespaces : 1024 00:32:39.246 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:39.246 Command Effects Log Page: Supported 00:32:39.246 Get Log Page Extended Data: Supported 00:32:39.246 Telemetry Log Pages: Not Supported 00:32:39.246 Persistent Event Log Pages: Not Supported 00:32:39.246 Supported Log Pages Log Page: May Support 00:32:39.246 Commands Supported & Effects Log Page: Not Supported 00:32:39.246 Feature Identifiers & Effects Log Page:May Support 00:32:39.246 NVMe-MI Commands & Effects Log Page: May Support 00:32:39.246 Data Area 4 for Telemetry Log: Not Supported 00:32:39.246 Error Log Page Entries Supported: 128 00:32:39.246 Keep Alive: Supported 00:32:39.246 Keep Alive Granularity: 1000 ms 00:32:39.246 00:32:39.246 NVM Command Set Attributes 00:32:39.246 ========================== 00:32:39.246 Submission Queue Entry Size 00:32:39.246 Max: 64 00:32:39.246 Min: 64 00:32:39.246 Completion Queue Entry Size 00:32:39.246 Max: 16 00:32:39.246 Min: 16 00:32:39.246 Number of Namespaces: 1024 00:32:39.246 Compare Command: Not Supported 00:32:39.246 Write Uncorrectable Command: Not Supported 00:32:39.246 Dataset Management Command: Supported 
00:32:39.246 Write Zeroes Command: Supported 00:32:39.246 Set Features Save Field: Not Supported 00:32:39.246 Reservations: Not Supported 00:32:39.246 Timestamp: Not Supported 00:32:39.246 Copy: Not Supported 00:32:39.246 Volatile Write Cache: Present 00:32:39.246 Atomic Write Unit (Normal): 1 00:32:39.246 Atomic Write Unit (PFail): 1 00:32:39.246 Atomic Compare & Write Unit: 1 00:32:39.246 Fused Compare & Write: Not Supported 00:32:39.246 Scatter-Gather List 00:32:39.246 SGL Command Set: Supported 00:32:39.246 SGL Keyed: Not Supported 00:32:39.246 SGL Bit Bucket Descriptor: Not Supported 00:32:39.246 SGL Metadata Pointer: Not Supported 00:32:39.246 Oversized SGL: Not Supported 00:32:39.246 SGL Metadata Address: Not Supported 00:32:39.246 SGL Offset: Supported 00:32:39.246 Transport SGL Data Block: Not Supported 00:32:39.246 Replay Protected Memory Block: Not Supported 00:32:39.246 00:32:39.246 Firmware Slot Information 00:32:39.246 ========================= 00:32:39.246 Active slot: 0 00:32:39.246 00:32:39.246 Asymmetric Namespace Access 00:32:39.246 =========================== 00:32:39.246 Change Count : 0 00:32:39.246 Number of ANA Group Descriptors : 1 00:32:39.246 ANA Group Descriptor : 0 00:32:39.246 ANA Group ID : 1 00:32:39.246 Number of NSID Values : 1 00:32:39.246 Change Count : 0 00:32:39.246 ANA State : 1 00:32:39.246 Namespace Identifier : 1 00:32:39.246 00:32:39.246 Commands Supported and Effects 00:32:39.246 ============================== 00:32:39.246 Admin Commands 00:32:39.246 -------------- 00:32:39.246 Get Log Page (02h): Supported 00:32:39.246 Identify (06h): Supported 00:32:39.246 Abort (08h): Supported 00:32:39.246 Set Features (09h): Supported 00:32:39.246 Get Features (0Ah): Supported 00:32:39.246 Asynchronous Event Request (0Ch): Supported 00:32:39.246 Keep Alive (18h): Supported 00:32:39.246 I/O Commands 00:32:39.246 ------------ 00:32:39.246 Flush (00h): Supported 00:32:39.246 Write (01h): Supported LBA-Change 00:32:39.246 Read (02h): Supported 00:32:39.246 Write Zeroes (08h): Supported LBA-Change 00:32:39.246 Dataset Management (09h): Supported 00:32:39.246 00:32:39.246 Error Log 00:32:39.246 ========= 00:32:39.246 Entry: 0 00:32:39.246 Error Count: 0x3 00:32:39.246 Submission Queue Id: 0x0 00:32:39.246 Command Id: 0x5 00:32:39.246 Phase Bit: 0 00:32:39.246 Status Code: 0x2 00:32:39.246 Status Code Type: 0x0 00:32:39.246 Do Not Retry: 1 00:32:39.246 Error Location: 0x28 00:32:39.246 LBA: 0x0 00:32:39.246 Namespace: 0x0 00:32:39.246 Vendor Log Page: 0x0 00:32:39.246 ----------- 00:32:39.246 Entry: 1 00:32:39.246 Error Count: 0x2 00:32:39.246 Submission Queue Id: 0x0 00:32:39.246 Command Id: 0x5 00:32:39.246 Phase Bit: 0 00:32:39.246 Status Code: 0x2 00:32:39.246 Status Code Type: 0x0 00:32:39.246 Do Not Retry: 1 00:32:39.246 Error Location: 0x28 00:32:39.246 LBA: 0x0 00:32:39.246 Namespace: 0x0 00:32:39.246 Vendor Log Page: 0x0 00:32:39.246 ----------- 00:32:39.246 Entry: 2 00:32:39.246 Error Count: 0x1 00:32:39.246 Submission Queue Id: 0x0 00:32:39.246 Command Id: 0x4 00:32:39.246 Phase Bit: 0 00:32:39.246 Status Code: 0x2 00:32:39.246 Status Code Type: 0x0 00:32:39.247 Do Not Retry: 1 00:32:39.247 Error Location: 0x28 00:32:39.247 LBA: 0x0 00:32:39.247 Namespace: 0x0 00:32:39.247 Vendor Log Page: 0x0 00:32:39.247 00:32:39.247 Number of Queues 00:32:39.247 ================ 00:32:39.247 Number of I/O Submission Queues: 128 00:32:39.247 Number of I/O Completion Queues: 128 00:32:39.247 00:32:39.247 ZNS Specific Controller Data 00:32:39.247 
============================ 00:32:39.247 Zone Append Size Limit: 0 00:32:39.247 00:32:39.247 00:32:39.247 Active Namespaces 00:32:39.247 ================= 00:32:39.247 get_feature(0x05) failed 00:32:39.247 Namespace ID:1 00:32:39.247 Command Set Identifier: NVM (00h) 00:32:39.247 Deallocate: Supported 00:32:39.247 Deallocated/Unwritten Error: Not Supported 00:32:39.247 Deallocated Read Value: Unknown 00:32:39.247 Deallocate in Write Zeroes: Not Supported 00:32:39.247 Deallocated Guard Field: 0xFFFF 00:32:39.247 Flush: Supported 00:32:39.247 Reservation: Not Supported 00:32:39.247 Namespace Sharing Capabilities: Multiple Controllers 00:32:39.247 Size (in LBAs): 1953525168 (931GiB) 00:32:39.247 Capacity (in LBAs): 1953525168 (931GiB) 00:32:39.247 Utilization (in LBAs): 1953525168 (931GiB) 00:32:39.247 UUID: 8aef4b18-cb93-48d1-a698-f98ceb14f2dd 00:32:39.247 Thin Provisioning: Not Supported 00:32:39.247 Per-NS Atomic Units: Yes 00:32:39.247 Atomic Boundary Size (Normal): 0 00:32:39.247 Atomic Boundary Size (PFail): 0 00:32:39.247 Atomic Boundary Offset: 0 00:32:39.247 NGUID/EUI64 Never Reused: No 00:32:39.247 ANA group ID: 1 00:32:39.247 Namespace Write Protected: No 00:32:39.247 Number of LBA Formats: 1 00:32:39.247 Current LBA Format: LBA Format #00 00:32:39.247 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:39.247 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:39.247 rmmod nvme_tcp 00:32:39.247 rmmod nvme_fabrics 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:39.247 03:35:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.777 03:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:41.777 
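
configure_kernel_target earlier (around 03:35:44) built the kernel nvmet target purely through configfs, and clean_kernel_target below unwinds it in reverse. Both flows condensed into one sketch; the attribute file names (attr_allow_any_host, device_path, addr_*) are the standard nvmet configfs layout, inferred from the echoed values in the trace rather than quoted from it:

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    # Setup: subsystem, one namespace backed by the probed disk, one TCP port
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"       # expose the subsystem on the port

    # Teardown (clean_kernel_target): disable, unlink, remove, unload
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/$nqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet
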
03:35:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:41.777 03:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:41.777 03:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:41.777 03:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:41.777 03:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:41.777 03:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:41.777 03:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:41.777 03:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:41.777 03:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:41.777 03:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:42.711 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:42.711 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:42.711 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:42.711 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:42.711 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:42.711 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:42.711 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:42.711 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:42.711 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:42.711 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:42.711 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:42.711 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:42.711 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:42.711 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:42.711 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:42.711 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:43.645 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:43.904 00:32:43.904 real 0m9.390s 00:32:43.904 user 0m2.012s 00:32:43.904 sys 0m3.330s 00:32:43.904 03:35:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:43.904 03:35:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:43.904 ************************************ 00:32:43.904 END TEST nvmf_identify_kernel_target 00:32:43.904 ************************************ 00:32:43.904 03:35:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:43.904 03:35:49 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:43.904 03:35:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:43.904 03:35:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:43.904 03:35:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:43.904 ************************************ 00:32:43.904 START TEST nvmf_auth_host 00:32:43.904 ************************************ 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:43.904 * Looking for test storage... 00:32:43.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:43.904 03:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.430 
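The device scan that follows works from the PCI ID whitelists just declared: a function is kept only if its vendor:device pair is on the e810/x722/mlx lists and the kernel bound a network interface to it. A loose sketch of that probe (0x8086:0x159b is the E810 pair matched below; the real gather function handles many more IDs and the rdma case):

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        # a device only counts if it exposes a netdev under sysfs
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done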
03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:46.430 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:46.430 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:46.430 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:46.430 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.430 03:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:46.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:32:46.430 00:32:46.430 --- 10.0.0.2 ping statistics --- 00:32:46.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.430 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:46.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:32:46.430 00:32:46.430 --- 10.0.0.1 ping statistics --- 00:32:46.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.430 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.430 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3330726 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3330726 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3330726 ']' 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
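nvmf_tcp_init above splits the two E810 ports across a network namespace so target and initiator traffic cross a real link: cvl_0_0 becomes the target NIC inside cvl_0_0_ns_spdk (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). The same bring-up, condensed from the trace, followed by the launch of nvmf_tgt inside that namespace (paths relative to the spdk checkout; the poll loop is a stand-in for waitforlisten, whose body is not shown here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    until ./scripts/rpc.py rpc_get_methods &> /dev/null; do sleep 0.5; done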
00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f6f026d20ae90449fbbd77781d1086ae 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.LcS 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f6f026d20ae90449fbbd77781d1086ae 0 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f6f026d20ae90449fbbd77781d1086ae 0 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f6f026d20ae90449fbbd77781d1086ae 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.LcS 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.LcS 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.LcS 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:46.431 
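gen_dhchap_key, traced above for keys[0] and repeated below for the other nine secrets, always follows the same recipe: draw len/2 random bytes as a hex string, then wrap that string in the DHHC-1 secret representation, whose payload is base64 over the secret bytes plus their little-endian CRC32. A sketch under that reading (the inline python mirrors what the harness's `python -` step appears to compute; gen_key and its arguments are illustrative names, not the harness's own):

    gen_key() {                        # illustrative stand-in for gen_dhchap_key
        local digest_id=$1 len=$2 hex b64
        hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of entropy
        b64=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode(), end="")' "$hex")
        printf 'DHHC-1:%s:%s:' "$digest_id" "$b64"
    }
    file=$(mktemp -t spdk.key-null.XXX)
    gen_key 00 32 > "$file"            # 00 = null transform, as for keys[0] above
    chmod 0600 "$file"                 # secrets stay owner-readable only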
03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4c9ccb90234ff260434e610b2e9510540dd272c6e2c88f09d3431538c58756be 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.o0b 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4c9ccb90234ff260434e610b2e9510540dd272c6e2c88f09d3431538c58756be 3 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4c9ccb90234ff260434e610b2e9510540dd272c6e2c88f09d3431538c58756be 3 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4c9ccb90234ff260434e610b2e9510540dd272c6e2c88f09d3431538c58756be 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:46.431 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.o0b 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.o0b 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.o0b 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0b8df8150b64f3ec3c6b39f40b7e77db141c21590d3b65e9 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hTw 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0b8df8150b64f3ec3c6b39f40b7e77db141c21590d3b65e9 0 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0b8df8150b64f3ec3c6b39f40b7e77db141c21590d3b65e9 0 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0b8df8150b64f3ec3c6b39f40b7e77db141c21590d3b65e9 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hTw 00:32:46.688 03:35:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hTw 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.hTw 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a506f0959989e6155d1a8ea4a9c0b7aeca0d281268a259f5 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vee 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a506f0959989e6155d1a8ea4a9c0b7aeca0d281268a259f5 2 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a506f0959989e6155d1a8ea4a9c0b7aeca0d281268a259f5 2 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a506f0959989e6155d1a8ea4a9c0b7aeca0d281268a259f5 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vee 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vee 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.vee 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3d7f556504fdf73bd024276cb70944d3 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pqD 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3d7f556504fdf73bd024276cb70944d3 1 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3d7f556504fdf73bd024276cb70944d3 1 
00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3d7f556504fdf73bd024276cb70944d3 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pqD 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pqD 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.pqD 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=189c07c6ad98c6c5feab03dd540b3089 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.lBn 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 189c07c6ad98c6c5feab03dd540b3089 1 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 189c07c6ad98c6c5feab03dd540b3089 1 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=189c07c6ad98c6c5feab03dd540b3089 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.lBn 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.lBn 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.lBn 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:46.688 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=13ffdc03ab01d1e2bbae043fd1f494e6b14d0411cad3ee6d 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4mQ 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 13ffdc03ab01d1e2bbae043fd1f494e6b14d0411cad3ee6d 2 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 13ffdc03ab01d1e2bbae043fd1f494e6b14d0411cad3ee6d 2 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=13ffdc03ab01d1e2bbae043fd1f494e6b14d0411cad3ee6d 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:46.689 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4mQ 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4mQ 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4mQ 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0ef4bbb4ce1357985774cd7658c1d8c0 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.63L 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0ef4bbb4ce1357985774cd7658c1d8c0 0 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0ef4bbb4ce1357985774cd7658c1d8c0 0 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:46.946 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0ef4bbb4ce1357985774cd7658c1d8c0 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.63L 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.63L 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.63L 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7a9402a4d5d5106d07b6ef0df76478d03bbea73a967814467dedce14951bdd03 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6wb 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7a9402a4d5d5106d07b6ef0df76478d03bbea73a967814467dedce14951bdd03 3 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7a9402a4d5d5106d07b6ef0df76478d03bbea73a967814467dedce14951bdd03 3 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7a9402a4d5d5106d07b6ef0df76478d03bbea73a967814467dedce14951bdd03 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6wb 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6wb 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.6wb 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3330726 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3330726 ']' 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
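For reference, the ten secrets generated above pair up as follows; keyN is the host-side secret and ckeyN the controller-side secret for bidirectional authentication, and the empty ckeys[4] presumably leaves one pass unidirectional:

    keyid  key file                   ckey file
    0      /tmp/spdk.key-null.LcS     /tmp/spdk.key-sha512.o0b
    1      /tmp/spdk.key-null.hTw     /tmp/spdk.key-sha384.vee
    2      /tmp/spdk.key-sha256.pqD   /tmp/spdk.key-sha256.lBn
    3      /tmp/spdk.key-sha384.4mQ   /tmp/spdk.key-null.63L
    4      /tmp/spdk.key-sha512.6wb   (none)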
00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:46.947 03:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.LcS 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.o0b ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.o0b 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.hTw 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.vee ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vee 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.pqD 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.lBn ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lBn 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4mQ 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.63L ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.63L 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6wb 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:47.206 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:47.207 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:47.207 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:47.207 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
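The rpc_cmd loop above hands each secret file to the running target's keyring, naming the entries key0..key4 and ckey0..ckey3 so later attach calls can reference keys by name rather than by path. Equivalent standalone calls (rpc_cmd is a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock):

    keys=(/tmp/spdk.key-null.LcS /tmp/spdk.key-null.hTw /tmp/spdk.key-sha256.pqD
          /tmp/spdk.key-sha384.4mQ /tmp/spdk.key-sha512.6wb)
    ckeys=(/tmp/spdk.key-sha512.o0b /tmp/spdk.key-sha384.vee /tmp/spdk.key-sha256.lBn
           /tmp/spdk.key-null.63L)
    for i in "${!keys[@]}"; do
        ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
        if [[ -n ${ckeys[$i]:-} ]]; then      # keys[4] has no controller counterpart
            ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done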
00:32:47.207 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:47.207 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:47.207 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:47.207 03:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:48.617 Waiting for block devices as requested 00:32:48.617 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:48.617 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:48.617 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:48.617 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:48.875 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:48.875 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:48.875 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:48.875 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:49.132 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:49.132 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:49.133 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:49.391 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:49.391 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:49.391 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:49.391 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:49.649 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:49.649 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:50.216 No valid GPT data, bailing 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:50.216 00:32:50.216 Discovery Log Number of Records 2, Generation counter 2 00:32:50.216 =====Discovery Log Entry 0====== 00:32:50.216 trtype: tcp 00:32:50.216 adrfam: ipv4 00:32:50.216 subtype: current discovery subsystem 00:32:50.216 treq: not specified, sq flow control disable supported 00:32:50.216 portid: 1 00:32:50.216 trsvcid: 4420 00:32:50.216 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:50.216 traddr: 10.0.0.1 00:32:50.216 eflags: none 00:32:50.216 sectype: none 00:32:50.216 =====Discovery Log Entry 1====== 00:32:50.216 trtype: tcp 00:32:50.216 adrfam: ipv4 00:32:50.216 subtype: nvme subsystem 00:32:50.216 treq: not specified, sq flow control disable supported 00:32:50.216 portid: 1 00:32:50.216 trsvcid: 4420 00:32:50.216 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:50.216 traddr: 10.0.0.1 00:32:50.216 eflags: none 00:32:50.216 sectype: none 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 
]] 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.216 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.474 nvme0n1 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.474 
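The trace above completes one full authentication pass. On the target side, nvmet_auth_set_key writes the digest, DH group, and both secrets for the allowed host; on the initiator side, connect_authenticate constrains the bdev driver, attaches with the named keys, and expects the controller to surface as nvme0. Reconstructed end-to-end (the dhchap_* attribute names are the standard Linux nvmet host attributes, inferred rather than shown in the trace; key strings truncated here, full values appear above):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"               # digest for this pass
    echo 'ffdhe2048'    > "$host/dhchap_dhgroup"            # DH group for this pass
    echo 'DHHC-1:00:MGI4ZGY4...' > "$host/dhchap_key"       # host secret (truncated)
    echo 'DHHC-1:02:YTUwNmYw...' > "$host/dhchap_ctrl_key"  # controller secret (truncated)

    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0              # reset for next combo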
03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.474 
03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.474 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.475 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.475 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.475 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.475 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:50.475 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.475 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.733 nvme0n1 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.733 03:35:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.733 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.991 nvme0n1 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
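The nvmet_auth_set_key frames above (host/auth.sh@42-51) program the kernel target's half of DH-HMAC-CHAP through configfs: a digest, a DH group, the host key and, for bidirectional passes, a controller key. A minimal sketch of that helper follows, assuming the standard Linux nvmet attribute names dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key — the trace shows the echoed values, not the destination paths:

# keys/ckeys are the DHHC-1 key tables this test iterates over; the hosts/
# directory itself was created at host/auth.sh@36 earlier in this log.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${hostdir}/dhchap_hash"    # e.g. hmac(sha256)
    echo "${dhgroup}" > "${hostdir}/dhchap_dhgroup"      # e.g. ffdhe2048
    echo "${keys[keyid]}" > "${hostdir}/dhchap_key"      # DHHC-1:... host key
    # a controller key is installed only when this keyid has one defined
    [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "${hostdir}/dhchap_ctrl_key"
}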
00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.991 03:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.992 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.992 03:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.992 nvme0n1 00:32:50.992 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.992 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.992 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.992 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.992 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:32:51.250 03:35:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.250 nvme0n1 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.250 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:51.508 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.509 nvme0n1 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.509 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.767 nvme0n1 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.767 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.768 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.768 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.768 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.768 03:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.768 03:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:51.768 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.768 03:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.026 nvme0n1 00:32:52.026 
03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.026 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.284 nvme0n1 00:32:52.284 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.284 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.284 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.284 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.284 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.284 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.284 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.284 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
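Each connect_authenticate pass (host/auth.sh@55-65) exercises the SPDK initiator over JSON-RPC. Written out as explicit scripts/rpc.py calls, one pass of the sha256/ffdhe3072/keyid=3 iteration in progress here looks roughly like the sketch below; rpc_cmd in the trace is the test suite's wrapper around that tool, the default RPC socket is assumed, and key3/ckey3 name keys loaded earlier in the run, outside this excerpt:

# restrict the initiator to the digest/dhgroup combination under test
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
# attach with the host key (plus controller key for bidirectional auth)
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
# a successful DH-HMAC-CHAP handshake leaves exactly one controller registered
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0              # clean up for the next pass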
00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.285 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.543 nvme0n1 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.543 
03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.543 03:35:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.543 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.801 nvme0n1 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:52.801 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:52.802 03:35:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.802 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.059 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.059 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.059 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.059 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.059 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.059 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.059 03:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.059 03:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:53.059 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.059 03:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.318 nvme0n1 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.318 03:35:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.318 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.576 nvme0n1 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.576 03:35:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.576 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.833 nvme0n1 00:32:53.833 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.833 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.833 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.833 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.833 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.833 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.833 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.833 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.833 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.833 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
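The records above complete one full pass of the cycle this trace repeats for every (digest, dhgroup, keyid) combination: program the key into the kernel nvmet host entry, restrict the host's DH-HMAC-CHAP options to the single combination under test, attach over TCP, confirm the controller came up, and detach. A minimal bash sketch of that cycle, assuming the harness context: the function names, RPC calls and echoed values are taken from the trace itself, while the keys/ckeys arrays, the rpc_cmd wrapper and get_main_ns_ip are provided by the surrounding test scripts, and the configfs paths are assumptions — the trace shows the echoed values at host/auth.sh@48-51 but not their redirection targets.

    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0

    nvmet_auth_set_key() {   # host/auth.sh@42-51 in the trace
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed configfs layout
        echo "hmac($digest)" > "$host/dhchap_hash"
        echo "$dhgroup" > "$host/dhchap_dhgroup"
        echo "$key" > "$host/dhchap_key"
        # The [[ -z $ckey ]] test visible in the trace guards this optional write:
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }

    connect_authenticate() {   # host/auth.sh@55-65 in the trace
        local digest=$1 dhgroup=$2 keyid=$3
        # Exactly the ckey=(...) expansion shown at host/auth.sh@58:
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        # The attach only succeeds once DH-HMAC-CHAP completes; verify, then tear down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Note how ckey=(...) expands to zero words when ckeys[keyid] is empty — as it is for keyid 4 above, where the trace shows ckey= followed by [[ -z '' ]] — so the optional --dhchap-ctrlr-key argument simply disappears from the attach command.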
00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.091 03:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.091 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.092 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:54.092 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.092 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.350 nvme0n1 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.350 03:36:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.350 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.608 nvme0n1 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:32:54.608 03:36:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.608 03:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.172 nvme0n1 00:32:55.172 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.172 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.172 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.172 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.172 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.172 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.172 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.172 
03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.173 03:36:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.173 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.737 nvme0n1 00:32:55.737 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.737 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.737 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.737 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.737 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.737 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:55.994 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.995 03:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.557 nvme0n1 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.557 
03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.557 03:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.120 nvme0n1 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.120 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.684 nvme0n1 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.684 03:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.616 nvme0n1 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.616 03:36:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.616 03:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.547 nvme0n1 00:32:59.547 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.547 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.547 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.547 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.547 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.547 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.547 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.547 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.547 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.547 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.805 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.806 03:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.741 nvme0n1 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.741 
03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
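Every attach in this trace is preceded by the same get_main_ns_ip expansion (nvmf/common.sh@741-755), which resolves the address to dial from the transport under test. A reconstruction from the expanded trace lines follows; the combined if-condition and the return codes are assumptions, since both -z tests sit on common.sh@747 and the failure branches never fire here.

    get_main_ns_ip() {   # nvmf/common.sh@741-755 in the trace
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # The trace's [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]] checks:
        if [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}   # ip=NVMF_INITIATOR_IP here
        [[ -z ${!ip} ]] && return 1            # indirect expansion: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                          # prints 10.0.0.1
    }

The array stores variable names rather than addresses, so the final ${!ip} indirection is what turns NVMF_INITIATOR_IP into 10.0.0.1 in every echo 10.0.0.1 record above.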
00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.741 03:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.675 nvme0n1 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:01.675 
03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.675 03:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.670 nvme0n1 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.670 nvme0n1 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
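The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line just above is what makes bidirectional authentication optional: bash's ${var:+word} expansion yields word only when var is set and non-empty, so the array either carries the extra --dhchap-ctrlr-key argument or expands to nothing at all. That is why the keyid=4 passes (whose ckey= is empty in this trace) attach with a one-way handshake while the other key IDs authenticate in both directions. A self-contained illustration of the same pattern, with made-up values:

    #!/usr/bin/env bash
    # Controller keys per keyid; an empty entry means "no bidirectional auth".
    ckeys=("secret-a" "secret-b" "")

    for keyid in "${!ckeys[@]}"; do
        # ${ckeys[keyid]:+...} expands only when the entry is non-empty,
        # so ckey becomes either two array elements or an empty array.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<unidirectional>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=2 -> <unidirectional>
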
00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.670 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.928 nvme0n1 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.928 03:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.928 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.186 nvme0n1 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.186 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.187 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:03.187 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.187 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.444 nvme0n1 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:03.444 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.445 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.702 nvme0n1 00:33:03.702 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.702 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.702 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.702 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.702 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.702 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
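The repeated get_main_ns_ip runs in this trace map each transport to the *name* of an environment variable rather than its value: ip_candidates["tcp"]=NVMF_INITIATOR_IP selects the name, and only the final step dereferences it to 10.0.0.1. A hedged reconstruction of the helper from the nvmf/common.sh@741-755 markers above (the real function may carry fallbacks this run never exercises):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -n $TEST_TRANSPORT ]] || return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # a variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -n ${!ip} ]] || return 1            # indirect expansion yields its value
        echo "${!ip}"
    }

    TEST_TRANSPORT=tcp
    NVMF_INITIATOR_IP=10.0.0.1
    get_main_ns_ip    # prints 10.0.0.1
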
00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.703 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.961 nvme0n1 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.961 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
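The path markers show where this sweep is driven from: auth.sh@100 iterates digests, @101 iterates DH groups, @102 iterates key IDs, and @103/@104 provision and connect for each combination. That loop nesting is why the excerpt finishes sha256 on ffdhe8192 and then restarts sha384 at ffdhe2048, walking through ffdhe3072 and ffdhe4096 with keyids 0-4 each time. In outline (the array contents are assumptions; the trace only proves the values it actually visits):

    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    keys=(key0 key1 key2 key3 key4)   # placeholders; the test stores DHHC-1 secrets

    # Stand-ins so the outline runs on its own; the real helpers live in
    # host/auth.sh (the @103/@104 markers above).
    nvmet_auth_set_key()   { echo "target: $*"; }
    connect_authenticate() { echo "host:   $*"; }

    for digest in "${digests[@]}"; do            # auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do      # auth.sh@101
            for keyid in "${!keys[@]}"; do       # auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103: target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: host side
            done
        done
    done
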
00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.962 03:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.219 nvme0n1 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.219 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.220 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.220 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.220 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.220 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.220 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.220 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.220 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.220 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:04.220 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.220 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.478 nvme0n1 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.478 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.736 nvme0n1 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.736 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.737 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.995 nvme0n1 00:33:04.995 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.995 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.995 03:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.995 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.995 03:36:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.995 03:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.995 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.996 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:04.996 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.996 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.254 nvme0n1 00:33:05.254 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.254 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.254 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.254 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.254 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.254 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.254 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.254 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.254 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.254 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.512 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:05.513 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.513 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.771 nvme0n1 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.771 03:36:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.771 03:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.030 nvme0n1 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.030 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:06.031 03:36:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.031 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.597 nvme0n1 00:33:06.597 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.597 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.597 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.597 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:06.598 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.856 nvme0n1 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.856 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.857 03:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.422 nvme0n1 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.422 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.987 nvme0n1 00:33:07.987 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.987 03:36:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.987 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.987 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.987 03:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.987 03:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.987 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.553 nvme0n1 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.553 03:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.119 nvme0n1 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:09.119 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
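The nvmet_auth_set_key traces above stage the DH-HMAC-CHAP material on the kernel target side before each connect attempt: the digest is written as 'hmac(sha384)', then the DH group, then the host key, and, when a controller key exists for that keyid, the bidirectional ckey. A minimal sketch of that sequence, assuming the standard nvmet configfs host directory (the path itself never appears in this trace and is an assumption):

  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path
  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"        # digest for this loop iteration
  echo ffdhe6144 > "$host_dir/dhchap_dhgroup"          # DH group for this loop iteration
  echo "$key" > "$host_dir/dhchap_key"                 # DHHC-1:xx:... host key from the keys[] array
  [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # optional bidirectional key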
00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.120 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.686 nvme0n1 00:33:09.686 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.686 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.686 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.686 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.686 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.686 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
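On the host side, connect_authenticate then exercises the same digest/dhgroup pair through the SPDK bdev_nvme RPCs: restrict the allowed negotiation parameters, attach with the matching keyid, verify the controller came up, and detach. Reconstructed from the rpc_cmd calls visible in this trace (a sketch of one iteration, not the test script itself):

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # auth succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0                                # clean up for next keyid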
00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.943 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.944 03:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.874 nvme0n1 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.874 03:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.875 03:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.875 03:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.875 03:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.875 03:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.875 03:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.875 03:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.875 03:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.875 03:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:10.875 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.875 03:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.807 nvme0n1 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.807 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.064 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.065 03:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.998 nvme0n1 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.998 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.999 03:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.932 nvme0n1 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.932 03:36:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.932 03:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.865 nvme0n1 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:14.865 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.866 03:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.124 nvme0n1 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.124 03:36:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:15.124 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.125 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.383 nvme0n1 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.383 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.641 nvme0n1 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.641 03:36:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.641 03:36:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.641 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.899 nvme0n1 00:33:15.899 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.899 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.899 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.899 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.900 03:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.900 nvme0n1 00:33:15.900 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.900 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.900 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.900 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.900 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.900 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.157 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.157 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.157 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.157 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.158 nvme0n1 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.158 
03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.158 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.416 03:36:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.416 nvme0n1 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.416 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
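For reference, the nvmet_auth_set_key helper traced throughout this log (host/auth.sh@42-51; its @50-51 key echoes continue just below) can be reconstructed from the xtrace. The DHHC-1:xx:...: strings are DH-HMAC-CHAP secrets in the NVMe secret representation, where the xx field names the secret transformation (00 = unhashed, 01/02/03 = SHA-256/384/512). Note that xtrace does not print redirections, so the configfs destinations in this sketch are an assumption based on the standard Linux nvmet host attributes, not something the log confirms:

    # Sketch of nvmet_auth_set_key as reconstructed from the trace above.
    # keys[]/ckeys[] are the harness's own arrays; the configfs path is an
    # ASSUMED redirection target (xtrace hides the actual one).
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey                 # auth.sh@42
        digest="$1" dhgroup="$2" keyid="$3"                 # auth.sh@44
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"         # auth.sh@45-46
        local host_dir="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"  # assumed
        echo "hmac($digest)" > "$host_dir/dhchap_hash"      # auth.sh@48
        echo "$dhgroup" > "$host_dir/dhchap_dhgroup"        # auth.sh@49
        echo "$key" > "$host_dir/dhchap_key"                # auth.sh@50
        # auth.sh@51: the controller key is optional; keyid 4 runs with ckey=''
        [[ -z "$ckey" ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
    }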
00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.674 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.933 nvme0n1 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.933 03:36:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
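The get_main_ns_ip helper traced at nvmf/common.sh@741-755 (its @747-755 checks continue just below) resolves which address the initiator dials for the transport under test. A plausible reconstruction from the trace; the indirect ${!ip} expansion is inferred, since xtrace shows ip=NVMF_INITIATOR_IP at @748 but the literal 10.0.0.1 by the @750 check:

    # get_main_ns_ip, reconstructed from the nvmf/common.sh@741-755 trace.
    # TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 in this run; the
    # indirection step is an inference from the @748 -> @750 values.
    get_main_ns_ip() {
        local ip                                            # common.sh@741
        local -A ip_candidates                              # common.sh@742
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP          # common.sh@744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP              # common.sh@745
        # common.sh@747: bail out if the transport or its candidate is unset
        if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}                # common.sh@748
        ip=${!ip}                                           # variable name -> 10.0.0.1
        [[ -z $ip ]] && return 1                            # common.sh@750
        echo "$ip"                                          # common.sh@755
    }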
00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.933 03:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.191 nvme0n1 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.191 
03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.191 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.448 nvme0n1 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:17.448 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.449 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.706 nvme0n1 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.706 03:36:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.706 03:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.963 nvme0n1 00:33:17.963 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.963 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.963 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
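
[Editor's note: the host/auth.sh@101-103 entries show this is the inner body of a dhgroup x keyid sweep, and the @42-51 entries show the target-side key setup. The sketch below reconstructs both under stated assumptions: the echo arguments are verbatim from the trace, but the configfs redirection targets are an assumption, because xtrace does not print where echo output is redirected; keys/ckeys are arrays populated earlier in the test, outside this excerpt.]

  # Hedged sketch of nvmet_auth_set_key (host/auth.sh@42-51) and the surrounding
  # sweep (@101-104), assuming the Linux nvmet target is driven through configfs.
  hostnqn=nqn.2024-02.io.spdk:host0
  cfs=/sys/kernel/config/nvmet/hosts/$hostnqn     # assumed configfs host entry

  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[keyid]} ckey=${ckeys[keyid]}

      echo "hmac($digest)" > "$cfs/dhchap_hash"   # trace @48: echo 'hmac(sha512)'
      echo "$dhgroup" > "$cfs/dhchap_dhgroup"     # trace @49: echo ffdhe4096
      echo "$key" > "$cfs/dhchap_key"             # trace @50: echo DHHC-1:...
      if [[ -n $ckey ]]; then                     # trace @51; keyid=4 has no ckey
          echo "$ckey" > "$cfs/dhchap_ctrl_key"   # controller key for bidirectional auth
      fi
  }

  for dhgroup in "${dhgroups[@]}"; do                      # trace @101
      for keyid in "${!keys[@]}"; do                       # trace @102
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # trace @103
          connect_authenticate sha512 "$dhgroup" "$keyid"  # trace @104
      done
  done
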
00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.964 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.221 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.478 nvme0n1 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.478 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.736 nvme0n1 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.736 03:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.737 03:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:18.737 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.737 03:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.995 nvme0n1 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.253 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
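
[Editor's note: each connect_authenticate iteration first restricts the SPDK host to exactly one digest/dhgroup pair, then attaches with the matching DH-HMAC-CHAP keys. The equivalent standalone RPC calls are shown below with values copied from the ffdhe6144/keyid=0 entries above; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, and the key names key0/ckey0 refer to keys registered earlier in the run, outside this excerpt.]

  # One connect_authenticate iteration expressed as plain SPDK RPCs
  # (all flags and values appear verbatim in the trace above).
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 \
      --dhchap-dhgroups ffdhe6144
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
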
00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.254 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.821 nvme0n1 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
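
[Editor's note: the bare "nvme0n1" tokens interleaved in the log are the attach RPC printing the bdev it created for namespace 1, i.e. the sign that the DH-HMAC-CHAP handshake succeeded. The host/auth.sh@64-65 entries then verify and tear down; a minimal sketch follows, where the function name is illustrative but the RPC and jq invocations are verbatim from the trace.]

  # Check-and-teardown step traced at host/auth.sh@64-65: assert the controller
  # came up authenticated, then detach so the next digest/dhgroup/key combination
  # starts from a clean state.
  assert_connected_then_detach() {
      local name
      name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
      [[ $name == "nvme0" ]]                    # a failed handshake leaves no controller
      rpc_cmd bdev_nvme_detach_controller nvme0
  }
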
00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.821 03:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.388 nvme0n1 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:20.388 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.389 03:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.954 nvme0n1 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.954 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.213 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.472 nvme0n1 00:33:21.472 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.472 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.472 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.472 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.472 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.472 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:21.730 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.731 03:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.077 nvme0n1 00:33:22.077 03:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.077 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.077 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.077 03:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.077 03:36:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.077 03:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.077 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.077 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.077 03:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.077 03:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjZmMDI2ZDIwYWU5MDQ0OWZiYmQ3Nzc4MWQxMDg2YWUQ1oG8: 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: ]] 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGM5Y2NiOTAyMzRmZjI2MDQzNGU2MTBiMmU5NTEwNTQwZGQyNzJjNmUyYzg4ZjA5ZDM0MzE1MzhjNTg3NTZiZfDtY/M=: 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:22.362 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.363 03:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.297 nvme0n1 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.297 03:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.241 nvme0n1 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.241 03:36:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Q3ZjU1NjUwNGZkZjczYmQwMjQyNzZjYjcwOTQ0ZDMEr3Bz: 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: ]] 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTg5YzA3YzZhZDk4YzZjNWZlYWIwM2RkNTQwYjMwODlz4mjG: 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.241 03:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.175 nvme0n1 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTNmZmRjMDNhYjAxZDFlMmJiYWUwNDNmZDFmNDk0ZTZiMTRkMDQxMWNhZDNlZTZkD806dA==: 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: ]] 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGVmNGJiYjRjZTEzNTc5ODU3NzRjZDc2NThjMWQ4YzDGeU7a: 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:25.175 03:36:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.175 03:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.108 nvme0n1 00:33:26.108 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.108 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.108 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.108 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.108 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.108 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.108 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.108 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.108 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.108 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E5NDAyYTRkNWQ1MTA2ZDA3YjZlZjBkZjc2NDc4ZDAzYmJlYTczYTk2NzgxNDQ2N2RlZGNlMTQ5NTFiZGQwM2sgfW8=: 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:26.366 03:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.297 nvme0n1 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGI4ZGY4MTUwYjY0ZjNlYzNjNmIzOWY0MGI3ZTc3ZGIxNDFjMjE1OTBkM2I2NWU5gChLnQ==: 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwNmYwOTU5OTg5ZTYxNTVkMWE4ZWE0YTljMGI3YWVjYTBkMjgxMjY4YTI1OWY1q1RGMw==: 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.297 
03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.297 request: 00:33:27.297 { 00:33:27.297 "name": "nvme0", 00:33:27.297 "trtype": "tcp", 00:33:27.297 "traddr": "10.0.0.1", 00:33:27.297 "adrfam": "ipv4", 00:33:27.297 "trsvcid": "4420", 00:33:27.297 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:27.297 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:27.297 "prchk_reftag": false, 00:33:27.297 "prchk_guard": false, 00:33:27.297 "hdgst": false, 00:33:27.297 "ddgst": false, 00:33:27.297 "method": "bdev_nvme_attach_controller", 00:33:27.297 "req_id": 1 00:33:27.297 } 00:33:27.297 Got JSON-RPC error response 00:33:27.297 response: 00:33:27.297 { 00:33:27.297 "code": -5, 00:33:27.297 "message": "Input/output error" 00:33:27.297 } 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.297 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.554 request: 00:33:27.554 { 00:33:27.554 "name": "nvme0", 00:33:27.554 "trtype": "tcp", 00:33:27.554 "traddr": "10.0.0.1", 00:33:27.554 "adrfam": "ipv4", 00:33:27.554 "trsvcid": "4420", 00:33:27.554 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:27.554 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:27.554 "prchk_reftag": false, 00:33:27.554 "prchk_guard": false, 00:33:27.554 "hdgst": false, 00:33:27.554 "ddgst": false, 00:33:27.554 "dhchap_key": "key2", 00:33:27.554 "method": "bdev_nvme_attach_controller", 00:33:27.554 "req_id": 1 00:33:27.554 } 00:33:27.554 Got JSON-RPC error response 00:33:27.554 response: 00:33:27.554 { 00:33:27.554 "code": -5, 00:33:27.554 "message": "Input/output error" 00:33:27.554 } 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:27.554 03:36:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.554 request: 00:33:27.554 { 00:33:27.554 "name": "nvme0", 00:33:27.554 "trtype": "tcp", 00:33:27.554 "traddr": "10.0.0.1", 00:33:27.554 "adrfam": "ipv4", 
00:33:27.554 "trsvcid": "4420", 00:33:27.554 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:27.554 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:27.554 "prchk_reftag": false, 00:33:27.554 "prchk_guard": false, 00:33:27.554 "hdgst": false, 00:33:27.554 "ddgst": false, 00:33:27.554 "dhchap_key": "key1", 00:33:27.554 "dhchap_ctrlr_key": "ckey2", 00:33:27.554 "method": "bdev_nvme_attach_controller", 00:33:27.554 "req_id": 1 00:33:27.554 } 00:33:27.554 Got JSON-RPC error response 00:33:27.554 response: 00:33:27.554 { 00:33:27.554 "code": -5, 00:33:27.554 "message": "Input/output error" 00:33:27.554 } 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:27.554 rmmod nvme_tcp 00:33:27.554 rmmod nvme_fabrics 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3330726 ']' 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3330726 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3330726 ']' 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3330726 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:27.554 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3330726 00:33:27.811 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:27.811 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:27.812 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3330726' 00:33:27.812 killing process with pid 3330726 00:33:27.812 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3330726 00:33:27.812 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3330726 00:33:27.812 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:33:27.812 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:27.812 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:27.812 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:27.812 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:27.812 03:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.812 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:27.812 03:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:30.346 03:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:30.346 03:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:31.281 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:31.281 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:31.281 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:31.281 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:31.281 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:31.281 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:31.281 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:31.281 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:31.281 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:31.281 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:31.281 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:31.281 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:31.281 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:31.281 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:31.282 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:31.282 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:32.215 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:32.215 03:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.LcS /tmp/spdk.key-null.hTw /tmp/spdk.key-sha256.pqD /tmp/spdk.key-sha384.4mQ /tmp/spdk.key-sha512.6wb 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:32.215 03:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:33.588 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:33.588 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:33.588 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:33.588 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:33.588 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:33.588 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:33.588 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:33.588 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:33.588 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:33.588 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:33.588 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:33.588 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:33.588 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:33.588 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:33.588 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:33.588 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:33.588 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:33.588 00:33:33.588 real 0m49.720s 00:33:33.588 user 0m47.539s 00:33:33.588 sys 0m5.802s 00:33:33.588 03:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:33.588 03:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.588 ************************************ 00:33:33.588 END TEST nvmf_auth_host 00:33:33.588 ************************************ 00:33:33.588 03:36:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:33.588 03:36:39 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:33.588 03:36:39 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:33.588 03:36:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:33.588 03:36:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:33.588 03:36:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:33.588 ************************************ 00:33:33.588 START TEST nvmf_digest 00:33:33.588 ************************************ 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:33.588 * Looking for test storage... 
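The nvmf_auth_host run that ends above walks every digest/DH-group/key-index combination through the same cycle: program a DHHC-1 secret (and, for bidirectional authentication, a controller secret) into the kernel nvmet target via configfs, attach an SPDK controller over TCP with the matching key names, verify the controller comes up as nvme0, and detach. The NOT cases near the end check the negative paths: attaching with no key, the wrong key slot (key2), or a mismatched controller key (key1/ckey2) is rejected with JSON-RPC error -5 (Input/output error). A condensed sketch of one positive iteration, assuming the stock scripts/rpc.py client and key names (key0/ckey0) registered beforehand from the /tmp/spdk.key-* files seen in the cleanup step; this mirrors the rpc_cmd calls logged above rather than adding anything new:

    # Constrain negotiation to one digest and DH group (as in host/auth.sh@60).
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # Attach with the host key and the bidirectional controller key (host/auth.sh@61).
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Confirm the controller exists, then tear down before the next combination.
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The target side of each iteration lives in kernel configfs (the /sys/kernel/config/nvmet/... entries removed during cleanup), which is why the suite ends by unloading nvmet_tcp/nvmet and rebinding the test devices.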
00:33:33.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:33.588 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:33.589 03:36:39 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:33.589 03:36:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.119 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:36.120 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:36.120 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:36.120 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:36.120 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:36.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:33:36.120 00:33:36.120 --- 10.0.0.2 ping statistics --- 00:33:36.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.120 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:36.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:33:36.120 00:33:36.120 --- 10.0.0.1 ping statistics --- 00:33:36.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.120 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:36.120 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:36.121 ************************************ 00:33:36.121 START TEST nvmf_digest_clean 00:33:36.121 ************************************ 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3340786 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3340786 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3340786 ']' 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.121 
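The nvmf_tcp_init sequence traced above turns the two ice ports into a point-to-point test link: the first port (cvl_0_0) is moved into a fresh network namespace and serves as the target, while the second (cvl_0_1) stays in the root namespace as the initiator. A minimal replay of those steps, using only commands visible in the trace (error handling omitted):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Both pings succeeding is the gate for nvmf_tcp_init to return 0 and for the digest tests to start.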
03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:36.121 03:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:36.121 [2024-07-15 03:36:42.019615] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:36.121 [2024-07-15 03:36:42.019713] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.121 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.121 [2024-07-15 03:36:42.091779] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.121 [2024-07-15 03:36:42.182866] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.121 [2024-07-15 03:36:42.182935] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.121 [2024-07-15 03:36:42.182962] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.121 [2024-07-15 03:36:42.182975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.121 [2024-07-15 03:36:42.182987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
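waitforlisten, seen here and again before every bdevperf run below, simply polls the new process's UNIX-domain RPC socket until it answers. The real helper lives in autotest_common.sh; the sketch below is only an approximation, with the retry budget taken from the max_retries=100 trace and the poll interval assumed:

    # hedged approximation of autotest_common.sh's waitforlisten
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do
            # any cheap RPC proves the app is up and listening
            if rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            kill -0 "$pid" 2> /dev/null || return 1    # process died before listening
            sleep 0.5                                  # interval is an assumption
        done
        return 1                                       # timed out
    }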
00:33:36.121 [2024-07-15 03:36:42.183025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.121 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:36.121 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:36.121 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:36.121 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:36.121 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:36.121 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.121 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:36.121 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:36.121 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:36.121 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.121 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:36.379 null0 00:33:36.379 [2024-07-15 03:36:42.358966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.379 [2024-07-15 03:36:42.383202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3340806 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3340806 /var/tmp/bperf.sock 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3340806 ']' 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:36.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:36.379 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:36.379 [2024-07-15 03:36:42.433813] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:36.379 [2024-07-15 03:36:42.433899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340806 ] 00:33:36.379 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.380 [2024-07-15 03:36:42.501912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.638 [2024-07-15 03:36:42.595825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.638 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:36.638 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:36.638 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:36.638 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:36.638 03:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:36.894 03:36:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.894 03:36:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:37.458 nvme0n1 00:33:37.458 03:36:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:37.458 03:36:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:37.458 Running I/O for 2 seconds... 
00:33:39.994 00:33:39.994 Latency(us) 00:33:39.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.994 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:39.994 nvme0n1 : 2.01 18480.70 72.19 0.00 0.00 6916.90 3689.43 16311.18 00:33:39.994 =================================================================================================================== 00:33:39.994 Total : 18480.70 72.19 0.00 0.00 6916.90 3689.43 16311.18 00:33:39.994 0 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:39.994 | select(.opcode=="crc32c") 00:33:39.994 | "\(.module_name) \(.executed)"' 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3340806 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3340806 ']' 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3340806 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3340806 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3340806' 00:33:39.994 killing process with pid 3340806 00:33:39.994 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3340806 00:33:39.994 Received shutdown signal, test time was about 2.000000 seconds 00:33:39.994 00:33:39.995 Latency(us) 00:33:39.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.995 =================================================================================================================== 00:33:39.995 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:39.995 03:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3340806 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:39.995 03:36:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3341215 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3341215 /var/tmp/bperf.sock 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3341215 ']' 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:39.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:39.995 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:39.995 [2024-07-15 03:36:46.084721] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:39.995 [2024-07-15 03:36:46.084812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341215 ] 00:33:39.995 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:39.995 Zero copy mechanism will not be used. 
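Each run_bperf pass launches a private bdevperf initiator against the namespaced target; the full command lines are in the trace above, condensed here with the absolute paths shortened. -z parks bdevperf until a perform_tests RPC arrives, and --wait-for-rpc defers framework init so accel options can still be changed first:

    bdevperf -m 2 -r /var/tmp/bperf.sock \
             -w randread -o 131072 -q 16 -t 2 -z --wait-for-rpc &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest (CRC32C over each data PDU),
    # the code path this whole suite exercises
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
           -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    bdevperf.py -s /var/tmp/bperf.sock perform_tests   # start the 2-second run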
00:33:39.995 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.252 [2024-07-15 03:36:46.149414] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.253 [2024-07-15 03:36:46.241617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.253 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:40.253 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:40.253 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:40.253 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:40.253 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:40.510 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:40.510 03:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.074 nvme0n1 00:33:41.074 03:36:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:41.074 03:36:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:41.332 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:41.332 Zero copy mechanism will not be used. 00:33:41.332 Running I/O for 2 seconds... 
00:33:43.258 00:33:43.258 Latency(us) 00:33:43.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.258 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:43.258 nvme0n1 : 2.00 3967.20 495.90 0.00 0.00 4028.10 1286.45 10097.40 00:33:43.258 =================================================================================================================== 00:33:43.258 Total : 3967.20 495.90 0.00 0.00 4028.10 1286.45 10097.40 00:33:43.258 0 00:33:43.258 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:43.258 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:43.258 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:43.258 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:43.258 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:43.258 | select(.opcode=="crc32c") 00:33:43.258 | "\(.module_name) \(.executed)"' 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3341215 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3341215 ']' 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3341215 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3341215 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3341215' 00:33:43.516 killing process with pid 3341215 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3341215 00:33:43.516 Received shutdown signal, test time was about 2.000000 seconds 00:33:43.516 00:33:43.516 Latency(us) 00:33:43.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.516 =================================================================================================================== 00:33:43.516 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:43.516 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3341215 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:43.774 03:36:49 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3341741 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3341741 /var/tmp/bperf.sock 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3341741 ']' 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:43.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:43.774 03:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:43.774 [2024-07-15 03:36:49.865587] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
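After each timed run, the harness reads the accel framework's counters back over the same socket to prove that crc32c work actually executed, and in the expected module (software here, since scan_dsa=false). A sketch of that check as wired inside run_bperf, using the accel_get_stats call and jq filter from the trace:

    read -r acc_module acc_executed < <(
        rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))        || return 1   # digests must have been computed
    [[ $acc_module == software ]] || return 1   # ...by the expected module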
00:33:43.774 [2024-07-15 03:36:49.865684] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341741 ] 00:33:43.774 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.032 [2024-07-15 03:36:49.929321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.032 [2024-07-15 03:36:50.029198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.032 03:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:44.032 03:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:44.032 03:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:44.032 03:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:44.032 03:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:44.289 03:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:44.289 03:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:44.854 nvme0n1 00:33:44.854 03:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:44.854 03:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:45.111 Running I/O for 2 seconds... 
00:33:47.011 00:33:47.011 Latency(us) 00:33:47.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.011 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:47.011 nvme0n1 : 2.00 20577.46 80.38 0.00 0.00 6210.21 2864.17 12136.30 00:33:47.011 =================================================================================================================== 00:33:47.011 Total : 20577.46 80.38 0.00 0.00 6210.21 2864.17 12136.30 00:33:47.011 0 00:33:47.011 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:47.011 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:47.011 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:47.011 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:47.011 | select(.opcode=="crc32c") 00:33:47.011 | "\(.module_name) \(.executed)"' 00:33:47.011 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3341741 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3341741 ']' 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3341741 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3341741 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3341741' 00:33:47.271 killing process with pid 3341741 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3341741 00:33:47.271 Received shutdown signal, test time was about 2.000000 seconds 00:33:47.271 00:33:47.271 Latency(us) 00:33:47.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.271 =================================================================================================================== 00:33:47.271 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:47.271 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3341741 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:47.529 03:36:53 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3342149 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3342149 /var/tmp/bperf.sock 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3342149 ']' 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:47.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:47.529 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:47.529 [2024-07-15 03:36:53.614729] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:47.529 [2024-07-15 03:36:53.614824] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342149 ] 00:33:47.529 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:47.529 Zero copy mechanism will not be used. 
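This kicks off the last of the four clean-digest permutations. Together they sweep both I/O directions at a small and a large block size; the 128 KiB cases sit above bdevperf's 64 KiB zero-copy threshold, which is what the "Zero copy mechanism will not be used" notices are about:

    run_bperf randread  4096   128 false   # 4 KiB reads,    qd 128
    run_bperf randread  131072 16  false   # 128 KiB reads,  qd 16
    run_bperf randwrite 4096   128 false   # 4 KiB writes,   qd 128
    run_bperf randwrite 131072 16  false   # 128 KiB writes, qd 16  (this run)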
00:33:47.529 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.786 [2024-07-15 03:36:53.680402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.786 [2024-07-15 03:36:53.771669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.786 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.786 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:47.786 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:47.786 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:47.786 03:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:48.044 03:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.044 03:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.608 nvme0n1 00:33:48.608 03:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:48.608 03:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:48.865 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:48.865 Zero copy mechanism will not be used. 00:33:48.865 Running I/O for 2 seconds... 
00:33:50.765 00:33:50.765 Latency(us) 00:33:50.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.765 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:50.765 nvme0n1 : 2.01 2934.69 366.84 0.00 0.00 5437.76 4174.89 10728.49 00:33:50.765 =================================================================================================================== 00:33:50.765 Total : 2934.69 366.84 0.00 0.00 5437.76 4174.89 10728.49 00:33:50.765 0 00:33:50.765 03:36:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:50.765 03:36:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:50.765 03:36:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:50.765 03:36:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:50.765 | select(.opcode=="crc32c") 00:33:50.765 | "\(.module_name) \(.executed)"' 00:33:50.765 03:36:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3342149 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3342149 ']' 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3342149 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3342149 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3342149' 00:33:51.023 killing process with pid 3342149 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3342149 00:33:51.023 Received shutdown signal, test time was about 2.000000 seconds 00:33:51.023 00:33:51.023 Latency(us) 00:33:51.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.023 =================================================================================================================== 00:33:51.023 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:51.023 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3342149 00:33:51.282 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3340786 00:33:51.282 03:36:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3340786 ']' 00:33:51.282 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3340786 00:33:51.282 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:51.282 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:51.282 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3340786 00:33:51.282 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:51.282 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:51.282 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3340786' 00:33:51.282 killing process with pid 3340786 00:33:51.282 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3340786 00:33:51.282 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3340786 00:33:51.540 00:33:51.540 real 0m15.615s 00:33:51.540 user 0m31.128s 00:33:51.540 sys 0m4.165s 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:51.540 ************************************ 00:33:51.540 END TEST nvmf_digest_clean 00:33:51.540 ************************************ 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:51.540 ************************************ 00:33:51.540 START TEST nvmf_digest_error 00:33:51.540 ************************************ 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3342698 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3342698 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3342698 ']' 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:51.540 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:51.540 [2024-07-15 03:36:57.674952] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:51.540 [2024-07-15 03:36:57.675029] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.799 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.799 [2024-07-15 03:36:57.739391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.799 [2024-07-15 03:36:57.827266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:51.799 [2024-07-15 03:36:57.827318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:51.799 [2024-07-15 03:36:57.827341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:51.799 [2024-07-15 03:36:57.827352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:51.799 [2024-07-15 03:36:57.827362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
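Unlike the clean suite, nvmf_digest_error reassigns the target's crc32c opcode to the accel "error" module so digests can be corrupted on demand, while the initiator is configured to keep NVMe error statistics and retry without bound. The knobs, as they appear in the traces that follow (target RPCs on spdk.sock, initiator RPCs on bperf.sock):

    # target: route every crc32c operation through the error-injection module
    rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
    # initiator: per-error NVMe stats, unbounded bdev-level retries
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # let digests pass through untouched...
    rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
    # ...or corrupt them, which produces the "data digest error" /
    # TRANSIENT TRANSPORT ERROR completions seen below
    rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256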
00:33:51.799 [2024-07-15 03:36:57.827393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:51.799 [2024-07-15 03:36:57.924023] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.799 03:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:52.058 null0 00:33:52.058 [2024-07-15 03:36:58.042300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.058 [2024-07-15 03:36:58.066550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3342724 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3342724 /var/tmp/bperf.sock 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3342724 ']' 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:52.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:52.058 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:52.058 [2024-07-15 03:36:58.113932] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:52.058 [2024-07-15 03:36:58.114002] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342724 ] 00:33:52.058 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.058 [2024-07-15 03:36:58.175966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.316 [2024-07-15 03:36:58.267766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.316 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:52.316 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:52.316 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:52.316 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:52.574 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:52.574 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.574 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:52.574 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.574 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:52.574 03:36:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:53.139 nvme0n1 00:33:53.139 03:36:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:53.139 03:36:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.139 03:36:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:53.139 03:36:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.139 03:36:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:53.139 03:36:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:53.139 Running I/O for 2 seconds... 00:33:53.139 [2024-07-15 03:36:59.204127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12323c0) 00:33:53.139 [2024-07-15 03:36:59.204180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.139 [2024-07-15 03:36:59.204217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.139 [2024-07-15 03:36:59.217558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12323c0) 00:33:53.139 [2024-07-15 03:36:59.217592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.139 [2024-07-15 03:36:59.217610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.139 [2024-07-15 03:36:59.228762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12323c0) 00:33:53.139 [2024-07-15 03:36:59.228793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.139 [2024-07-15 03:36:59.228811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.139 [2024-07-15 03:36:59.242485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12323c0) 00:33:53.139 [2024-07-15 03:36:59.242529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.139 [2024-07-15 03:36:59.242557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.139 [2024-07-15 03:36:59.255714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12323c0) 00:33:53.139 [2024-07-15 03:36:59.255745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.139 [2024-07-15 03:36:59.255763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.139 [2024-07-15 03:36:59.267871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12323c0) 00:33:53.139 [2024-07-15 03:36:59.267909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.139 [2024-07-15 03:36:59.267927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.139 [2024-07-15 03:36:59.279237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12323c0) 00:33:53.139 [2024-07-15 03:36:59.279268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24353 len:1 SGL 
[... the same three-line group repeats for the remainder of the 2-second run (03:36:59.217 through 03:37:01.154): a data digest error on tqpair=(0x12323c0), the failing READ on sqid:1, and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0; only the timestamps, cid, and lba values vary ...]
00:33:55.207 [2024-07-15 03:37:01.170055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12323c0)
00:33:55.207
00:33:55.207 Latency(us)
00:33:55.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:55.207 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:55.207 nvme0n1 : 2.01 18695.64 73.03 0.00 0.00 6836.80 3519.53 22913.33
00:33:55.207 ===================================================================================================================
00:33:55.207 Total : 18695.64 73.03 0.00 0.00 6836.80 3519.53 22913.33
00:33:55.207 0
00:33:55.207 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:55.207 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:55.207 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:55.207 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:55.207 | .driver_specific
00:33:55.207 | .nvme_error
00:33:55.207 | .status_code
00:33:55.207 | .command_transient_transport_error'
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 ))
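The get_transient_errcount helper traced above reduces to one iostat RPC piped through jq. A minimal standalone sketch of the same check, assuming the workspace path, bperf socket, and bdev name used in this run:

    # Pull the per-bdev NVMe error counters (enabled via --nvme-error-stat) and
    # extract the transient transport error count; this pass read back 147.
    count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # host/digest.sh@71 then asserts that at least one such error was recorded:
    (( count > 0 ))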
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3342724
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3342724 ']'
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3342724
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3342724
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3342724'
00:33:55.465 killing process with pid 3342724
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3342724
00:33:55.465 Received shutdown signal, test time was about 2.000000 seconds
00:33:55.465
00:33:55.465 Latency(us)
00:33:55.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:55.465 ===================================================================================================================
00:33:55.465 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:55.465 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3342724
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3343133
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3343133 /var/tmp/bperf.sock
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3343133 ']'
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:55.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:55.723 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:55.723 [2024-07-15 03:37:01.737202] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:33:55.723 [2024-07-15 03:37:01.737278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343133 ]
00:33:55.723 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:55.723 Zero copy mechanism will not be used.
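Each digest_error pass is driven through the pattern just traced: park bdevperf with -z, wait for its UNIX socket, configure it over that socket, then kick off the workload. A condensed sketch using the binaries and flags from this run; the socket poll is a rough stand-in for the waitforlisten helper, not its real implementation:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock
    # -z parks bdevperf until an RPC client starts the test instead of running immediately.
    "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    while [ ! -S "$BPERF_SOCK" ]; do sleep 0.1; done   # crude waitforlisten
    # Every bperf_rpc call is then plain rpc.py aimed at that socket, e.g.:
    "$SPDK_ROOT/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # ...and the run itself is started with:
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests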
00:33:55.723 EAL: No free 2048 kB hugepages reported on node 1
00:33:55.982 [2024-07-15 03:37:01.795303] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:55.982 [2024-07-15 03:37:01.881638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:55.982 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:55.982 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:55.982 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:55.982 03:37:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:56.238 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:56.238 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:56.238 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:56.238 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:56.238 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:56.239 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:56.806 nvme0n1
00:33:56.806 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:56.806 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:56.806 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:56.806 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:56.806 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:56.806 03:37:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:56.806 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:56.806 Zero copy mechanism will not be used.
00:33:56.806 Running I/O for 2 seconds...
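The rpc_cmd calls above are what manufacture the failures that follow: the controller is attached with --ddgst, so every TCP data PDU carries a crc32c data digest, and accel_error_inject_error then corrupts crc32c results at an interval of 32 operations. The failed completions below land exactly 0x20 (32) sqhd apart, consistent with that interval, and each one surfaces as a data digest error plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22). A sketch of just the injection step; sending it to the default RPC socket (no -s flag, i.e. not bperf.sock) mirrors the rpc_cmd/bperf_rpc split in the trace but is an inference, not something the log states:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc accel_error_inject_error -o crc32c -t disable          # clear any leftover injection
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32    # corrupt crc32c results every 32 ops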
00:33:56.806 [2024-07-15 03:37:02.804950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10)
00:33:56.806 [2024-07-15 03:37:02.804999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.806 [2024-07-15 03:37:02.805018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0

The same three-message pattern repeats every 7 to 10 ms from 03:37:02.814822 through 03:37:03.570134 (log prefixes 00:33:56.806 to 00:33:57.585), always qid:1 cid:15 nsid:1 len:32 on tqpair=(0x1b81f10), with sqhd advancing 0x20 per failure (0041, 0061, 0001, 0021, ...), for the following lba values, in order:

    16064, 15520, 7904, 16864, 13824, 6656, 9952, 7008, 8032, 12896, 19872,
    10816, 14816, 18368, 22720, 15136, 19840, 18944, 3008, 8512, 5376, 5920,
    17376, 20800, 20672, 768, 6368, 7776, 21280, 21344, 10176, 4416, 3232,
    9376, 11328, 12608, 22432, 10336, 24544, 20160, 20608, 10912, 3520, 11552,
    13792, 320, 14240, 19744, 24576, 9280, 7360, 21952, 22912, 10688, 22144,
    23488, 25248, 6464, 9376, 1664, 6624, 16672, 10304, 25216, 12256, 7424,
    16608, 7872, 17088, 17184, 5152, 18592, 5920, 800, 15712, 13920, 19264,
    19232, 25056, 4480, 11232, 2784, 20768, 20608, 1856, 16544, 22688, 10432,
    5568, 16416, 22496, 23776, 2592, 3872, 22560

00:33:57.585 [2024-07-15 03:37:03.577217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10)
00:33:57.585 [2024-07-15 03:37:03.577247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.577263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.584299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.584327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.584344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.591293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.591321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.591337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.598323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.598351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.598367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.605469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.605511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.605527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.612633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.612662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.612678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.619772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.619800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.619816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.627321] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.627349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.627366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.634728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.634758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.634774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.641821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.641849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.641889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.648938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.648967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.648995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.656006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.656036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.656052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.663125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.663154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.663171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.670209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.670237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.670253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.677240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.677269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.677284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.684310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.684340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.585 [2024-07-15 03:37:03.684356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.585 [2024-07-15 03:37:03.691275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.585 [2024-07-15 03:37:03.691303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.586 [2024-07-15 03:37:03.691319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.586 [2024-07-15 03:37:03.698452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.586 [2024-07-15 03:37:03.698481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.586 [2024-07-15 03:37:03.698496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.586 [2024-07-15 03:37:03.705608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.586 [2024-07-15 03:37:03.705636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.586 [2024-07-15 03:37:03.705652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.586 [2024-07-15 03:37:03.712633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.586 [2024-07-15 03:37:03.712668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.586 [2024-07-15 03:37:03.712684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.586 [2024-07-15 03:37:03.719778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.586 [2024-07-15 03:37:03.719806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.586 [2024-07-15 03:37:03.719821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.586 [2024-07-15 03:37:03.726794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.586 [2024-07-15 03:37:03.726824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.586 [2024-07-15 03:37:03.726841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.733872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.733908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.733925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.740861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.740911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.740929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.747874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.747911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.747942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.755036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.755064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.755080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.762066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.762095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.762111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.769135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.769178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.769195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.776370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.776399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.776415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.783953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.783982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.783999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.791098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.791127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.791143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.798062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.798091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.798107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.805125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.805154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.805184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.812189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.812232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.812248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.819266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.819294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:57.844 [2024-07-15 03:37:03.819310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.826261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.826289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.844 [2024-07-15 03:37:03.826305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.844 [2024-07-15 03:37:03.833316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.844 [2024-07-15 03:37:03.833346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.833370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.840472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.840500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.840516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.847689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.847719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.847735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.854700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.854728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.854744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.861907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.861936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.861952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.869076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.869105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.869121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.876130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.876176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.876192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.883351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.883381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.883397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.890361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.890390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.890405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.897289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.897317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.897333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.904374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.904417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.904433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.911780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.911823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.911839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.919104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.919136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.919152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.926359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.926387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.926403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.933411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.933439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.933455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.940459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.940487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.940502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.947390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.947418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.947434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.954400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.954429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.954453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.961461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.961504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.961520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.968453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 
00:33:57.845 [2024-07-15 03:37:03.968496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.968512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.975539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.975567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.975583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.845 [2024-07-15 03:37:03.982671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:57.845 [2024-07-15 03:37:03.982699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.845 [2024-07-15 03:37:03.982714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:03.989931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:03.989960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:03.989976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:03.997085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:03.997115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:03.997132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.004224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.004252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.004268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.011341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.011368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.011384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.018316] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.018351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.018368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.025332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.025362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.025377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.032298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.032326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.032342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.039998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.040029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.040046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.049203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.049234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.049251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.058536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.058566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.058582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.067991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.068022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.068039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.077490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.077520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.077537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.087061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.087094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.087112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.096597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.104 [2024-07-15 03:37:04.096629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.104 [2024-07-15 03:37:04.096645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.104 [2024-07-15 03:37:04.106184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.106230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.106246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.115760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.115791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.115808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.125085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.125116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.125133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.134503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.134533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.134550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.144182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.144228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.144244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.153433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.153464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.153481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.161438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.161470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.161486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.169961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.169992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.170016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.177092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.177122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.177138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.184086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.184116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.184132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.191224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.191269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.191285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.198353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.198396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.198413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.205565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.205593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.205609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.212761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.212790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.212806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.219858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.219893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.219926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.226942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.226971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.226988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.234386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.234415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.105 [2024-07-15 03:37:04.234431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.105 [2024-07-15 03:37:04.241633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10) 00:33:58.105 [2024-07-15 03:37:04.241661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:58.105 [2024-07-15 03:37:04.241676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:58.363 [2024-07-15 03:37:04.248979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b81f10)
00:33:58.363 [2024-07-15 03:37:04.249009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.363 [2024-07-15 03:37:04.249025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... several dozen further record triplets from 03:37:04.256 through 03:37:04.799 elided: each repeats the same data digest error on tqpair=(0x1b81f10), a READ on sqid:1 cid:15 nsid:1 len:32, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, differing only in timestamp, lba, and sqhd ...]
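Every completion above carries the same status pair. As a hedged illustration (a hypothetical helper, not SPDK code), the sketch below shows how the "(00/22)" printed by spdk_nvme_print_completion splits into status code type 0x0 (the NVMe generic command status set) and status code 0x22, Command Transient Transport Error, the retryable class that lets the bdev layer resubmit these IOs instead of failing them.

# Hedged sketch (not SPDK code): decode the "(SCT/SC)" pair that
# spdk_nvme_print_completion prints, e.g. "(00/22)" in the records above.
SCT_GENERIC = 0x0  # generic command status set

GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x04: "DATA TRANSFER ERROR",
    0x22: "COMMAND TRANSIENT TRANSPORT ERROR",  # retryable by the host
}

def decode_status(sct: int, sc: int) -> str:
    """Return a human-readable name for a completion status pair."""
    if sct == SCT_GENERIC:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct=0x{sct:x} sc=0x{sc:02x}"

print(decode_status(0x00, 0x22))  # -> COMMAND TRANSIENT TRANSPORT ERROR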
00:33:58.884
00:33:58.884 Latency(us)
00:33:58.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:58.884 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:58.884 nvme0n1 : 2.00 4069.31 508.66 0.00 0.00 3926.73 3301.07 10145.94
00:33:58.884 ===================================================================================================================
00:33:58.884 Total : 4069.31 508.66 0.00 0.00 3926.73 3301.07 10145.94
00:33:58.884 0
00:33:58.884 03:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:58.884 03:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:58.884 03:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:58.884 | .driver_specific
00:33:58.884 | .nvme_error
00:33:58.884 | .status_code
00:33:58.884 | .command_transient_transport_error'
00:33:58.884 03:37:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 263 > 0 ))
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3343133
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3343133 ']'
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3343133
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3343133
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3343133'
killing process with pid 3343133
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3343133
Received shutdown signal, test time was about 2.000000 seconds
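For reference, a minimal Python equivalent of the jq pipeline in the trace above, run against a hypothetical, trimmed bdev_get_iostat response (a real reply carries many more fields); the count of 263 mirrors the script's (( 263 > 0 )) pass check.

# Hedged sketch: extract the transient transport error count the same way
# the jq filter above does. The JSON is a trimmed, hypothetical response
# shape for illustration only.
import json

reply = json.loads('''
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": { "command_transient_transport_error": 263 }
        }
      }
    }
  ]
}
''')

count = (reply["bdevs"][0]["driver_specific"]
              ["nvme_error"]["status_code"]
              ["command_transient_transport_error"])
assert count > 0  # same pass condition as the script's (( 263 > 0 ))
print(count)      # 263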
00:33:59.143
00:33:59.143 Latency(us)
00:33:59.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:59.143 ===================================================================================================================
00:33:59.143 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:59.143 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3343133
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3343542
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3343542 /var/tmp/bperf.sock
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3343542 ']'
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:59.468 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:59.468 [2024-07-15 03:37:05.373653] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
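waitforlisten blocks until the freshly launched bdevperf accepts connections on its RPC socket. A minimal sketch of that polling loop follows (not SPDK's actual helper, which additionally re-checks on every retry that the pid is still alive); the socket path and the retry budget of 100 come from the trace above.

# Hedged sketch of a waitforlisten-style loop: poll the bdevperf RPC
# socket until something is listening on it, then return.
import socket
import time

def wait_for_rpc_socket(path: str = "/var/tmp/bperf.sock",
                        max_retries: int = 100) -> None:
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)   # succeeds once the app has bound and listened
            return
        except OSError:
            time.sleep(0.5)   # not up yet; back off and retry
        finally:
            s.close()
    raise TimeoutError(f"no listener on {path} after {max_retries} retries")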
00:33:59.468 [2024-07-15 03:37:05.373728] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343542 ]
00:33:59.468 EAL: No free 2048 kB hugepages reported on node 1
00:33:59.468 [2024-07-15 03:37:05.435816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:59.468 [2024-07-15 03:37:05.523653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:59.752 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:59.752 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:59.752 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:59.752 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:00.009 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:00.009 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:00.009 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:00.009 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:00.009 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:00.009 03:37:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:00.268 nvme0n1
00:34:00.268 03:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:34:00.268 03:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:00.268 03:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:00.268 03:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:00.268 03:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:00.268 03:37:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:00.268 Running I/O for 2 seconds...
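The controller was attached with --ddgst, so every NVMe/TCP data PDU carries a CRC32C data digest, and accel_error_inject_error -o crc32c -t corrupt makes the host's digest computation return wrong values, which is what produces the Data digest error records that follow. Below is a bitwise sketch of the CRC32C (Castagnoli) checksum itself, for illustration only; SPDK computes it through its accel framework, usually table-driven or hardware-assisted.

# Hedged sketch: CRC32C (Castagnoli), the checksum behind the NVMe/TCP
# data digest (DDGST) negotiated by --ddgst above. Reflected polynomial
# 0x82F63B78, init and final XOR 0xFFFFFFFF. Bitwise and slow on purpose.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value: CRC32C of "123456789" is 0xE3069283.
assert crc32c(b"123456789") == 0xE3069283
# Digest over one 4096-byte write payload (the -o 4096 IO size above):
print(hex(crc32c(bytes(4096))))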
00:34:00.268 [2024-07-15 03:37:06.394483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ee5c8
00:34:00.268 [2024-07-15 03:37:06.395407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:00.268 [2024-07-15 03:37:06.395464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
[... several dozen further record triplets from 03:37:06.405 through 03:37:06.945 elided: each repeats the same Data digest error on tqpair=(0x217dc40), a WRITE on sqid:1 nsid:1 len:1, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, differing only in timestamp, pdu, cid, lba, and sqhd ...]
00:34:01.045 [2024-07-15 03:37:06.958376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f31b8
00:34:01.045 [2024-07-15 03:37:06.959565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:01.046 [2024-07-15 03:37:06.959596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:06.971375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f2510 00:34:01.046 [2024-07-15 03:37:06.972439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:06.972470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:06.983337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f4298 00:34:01.046 [2024-07-15 03:37:06.985143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:06.985170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:06.994153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e01f8 00:34:01.046 [2024-07-15 03:37:06.995052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:06.995099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.007372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e0a68 00:34:01.046 [2024-07-15 03:37:07.008360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.008390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.020706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f6458 00:34:01.046 [2024-07-15 03:37:07.021888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.021933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.034833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f7538 00:34:01.046 [2024-07-15 03:37:07.036324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.036355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.046647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e6fa8 00:34:01.046 [2024-07-15 03:37:07.048059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15138 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.048088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.059861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190eb760 00:34:01.046 [2024-07-15 03:37:07.061370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.061411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.073200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190feb58 00:34:01.046 [2024-07-15 03:37:07.074853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.074890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.086420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190de038 00:34:01.046 [2024-07-15 03:37:07.088256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.088286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.098344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e6300 00:34:01.046 [2024-07-15 03:37:07.099687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.099717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.109608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f4f40 00:34:01.046 [2024-07-15 03:37:07.111603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.111633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.120714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ed0b0 00:34:01.046 [2024-07-15 03:37:07.121533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.121562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.133965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fbcf0 00:34:01.046 [2024-07-15 03:37:07.134961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22415 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.134988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.147236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f7538 00:34:01.046 [2024-07-15 03:37:07.148392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.148422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.161391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fb480 00:34:01.046 [2024-07-15 03:37:07.162729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.162760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.174527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e88f8 00:34:01.046 [2024-07-15 03:37:07.176052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.176080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:01.046 [2024-07-15 03:37:07.186606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e5658 00:34:01.046 [2024-07-15 03:37:07.188122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.046 [2024-07-15 03:37:07.188149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:01.304 [2024-07-15 03:37:07.198535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fd208 00:34:01.304 [2024-07-15 03:37:07.199545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.304 [2024-07-15 03:37:07.199577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:01.304 [2024-07-15 03:37:07.211425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f6458 00:34:01.304 [2024-07-15 03:37:07.212247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.304 [2024-07-15 03:37:07.212274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:01.304 [2024-07-15 03:37:07.225952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190df988 00:34:01.304 [2024-07-15 03:37:07.227780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:40 nsid:1 lba:23714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.304 [2024-07-15 03:37:07.227812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:01.304 [2024-07-15 03:37:07.239237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e6738 00:34:01.304 [2024-07-15 03:37:07.241247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.241278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.248199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f0788 00:34:01.305 [2024-07-15 03:37:07.249056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.249083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.261559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190df988 00:34:01.305 [2024-07-15 03:37:07.262576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.262607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.273662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e0a68 00:34:01.305 [2024-07-15 03:37:07.274664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.274694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.287010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190de038 00:34:01.305 [2024-07-15 03:37:07.288190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.288217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.300279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f7970 00:34:01.305 [2024-07-15 03:37:07.301589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.301620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.313504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ee5c8 00:34:01.305 [2024-07-15 03:37:07.315049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.315076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.326708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ed0b0 00:34:01.305 [2024-07-15 03:37:07.328375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.328405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.338562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f5378 00:34:01.305 [2024-07-15 03:37:07.339733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.339764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.351019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190de8a8 00:34:01.305 [2024-07-15 03:37:07.352252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.352283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.365350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190eea00 00:34:01.305 [2024-07-15 03:37:07.367191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.367222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.377226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e3498 00:34:01.305 [2024-07-15 03:37:07.378560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.378591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.388739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e27f0 00:34:01.305 [2024-07-15 03:37:07.390515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.390545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.400436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ff3c8 00:34:01.305 [2024-07-15 
03:37:07.401272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.401302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.413560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f1430 00:34:01.305 [2024-07-15 03:37:07.414550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.414576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.425547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fc560 00:34:01.305 [2024-07-15 03:37:07.426573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.426604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:01.305 [2024-07-15 03:37:07.438903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fa3a0 00:34:01.305 [2024-07-15 03:37:07.440071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.305 [2024-07-15 03:37:07.440103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:01.563 [2024-07-15 03:37:07.452192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f31b8 00:34:01.563 [2024-07-15 03:37:07.453519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.563 [2024-07-15 03:37:07.453550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:01.563 [2024-07-15 03:37:07.464033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e95a0 00:34:01.563 [2024-07-15 03:37:07.464866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.563 [2024-07-15 03:37:07.464903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:01.563 [2024-07-15 03:37:07.476824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ea680 00:34:01.563 [2024-07-15 03:37:07.477478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.563 [2024-07-15 03:37:07.477508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:01.563 [2024-07-15 03:37:07.490091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fd208 
00:34:01.563 [2024-07-15 03:37:07.490898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.563 [2024-07-15 03:37:07.490940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:01.563 [2024-07-15 03:37:07.503337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f8a50 00:34:01.563 [2024-07-15 03:37:07.504361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.563 [2024-07-15 03:37:07.504392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:01.563 [2024-07-15 03:37:07.515282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e2c28 00:34:01.563 [2024-07-15 03:37:07.517108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.563 [2024-07-15 03:37:07.517135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:01.563 [2024-07-15 03:37:07.526133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f9f68 00:34:01.563 [2024-07-15 03:37:07.526971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.563 [2024-07-15 03:37:07.526996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:01.563 [2024-07-15 03:37:07.540308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f3e60 00:34:01.563 [2024-07-15 03:37:07.541304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.563 [2024-07-15 03:37:07.541334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:01.563 [2024-07-15 03:37:07.553384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ed920 00:34:01.563 [2024-07-15 03:37:07.554552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.563 [2024-07-15 03:37:07.554582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:01.563 [2024-07-15 03:37:07.565408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f6cc8 00:34:01.563 [2024-07-15 03:37:07.566560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.563 [2024-07-15 03:37:07.566590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:01.563 [2024-07-15 03:37:07.578756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with 
pdu=0x2000190f6458 00:34:01.564 [2024-07-15 03:37:07.580154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.564 [2024-07-15 03:37:07.580181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:01.564 [2024-07-15 03:37:07.592028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fb480 00:34:01.564 [2024-07-15 03:37:07.593545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.564 [2024-07-15 03:37:07.593576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:01.564 [2024-07-15 03:37:07.605285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f8a50 00:34:01.564 [2024-07-15 03:37:07.606959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.564 [2024-07-15 03:37:07.606986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:01.564 [2024-07-15 03:37:07.618497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f0788 00:34:01.564 [2024-07-15 03:37:07.620330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.564 [2024-07-15 03:37:07.620360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:01.564 [2024-07-15 03:37:07.631814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e3498 00:34:01.564 [2024-07-15 03:37:07.633827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.564 [2024-07-15 03:37:07.633858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:01.564 [2024-07-15 03:37:07.640774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f4f40 00:34:01.564 [2024-07-15 03:37:07.641607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.564 [2024-07-15 03:37:07.641637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:01.564 [2024-07-15 03:37:07.652733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e9e10 00:34:01.564 [2024-07-15 03:37:07.653554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.564 [2024-07-15 03:37:07.653584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:01.564 [2024-07-15 03:37:07.665980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x217dc40) with pdu=0x2000190ec840 00:34:01.564 [2024-07-15 03:37:07.666995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.564 [2024-07-15 03:37:07.667023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:01.564 [2024-07-15 03:37:07.679240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f3e60 00:34:01.564 [2024-07-15 03:37:07.680414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.564 [2024-07-15 03:37:07.680445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:01.564 [2024-07-15 03:37:07.692574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f1430 00:34:01.564 [2024-07-15 03:37:07.693919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.564 [2024-07-15 03:37:07.693962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:01.564 [2024-07-15 03:37:07.705920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ddc00 00:34:01.823 [2024-07-15 03:37:07.707643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.707674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.719456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f0350 00:34:01.823 [2024-07-15 03:37:07.721199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.721225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.732692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f3a28 00:34:01.823 [2024-07-15 03:37:07.734526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.734557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.745898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f0ff8 00:34:01.823 [2024-07-15 03:37:07.747946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.747973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.754984] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e1710 00:34:01.823 [2024-07-15 03:37:07.755824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.755854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.768180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f3a28 00:34:01.823 [2024-07-15 03:37:07.769165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.769216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.780215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f5be8 00:34:01.823 [2024-07-15 03:37:07.781228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.781259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.793573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190eee38 00:34:01.823 [2024-07-15 03:37:07.794738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.794769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.806901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f3e60 00:34:01.823 [2024-07-15 03:37:07.808317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.808348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.820276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fb480 00:34:01.823 [2024-07-15 03:37:07.821778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.821809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.833524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ff3c8 00:34:01.823 [2024-07-15 03:37:07.835315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.835346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.846933] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e4578 00:34:01.823 [2024-07-15 03:37:07.848758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.848789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.860212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f0bc0 00:34:01.823 [2024-07-15 03:37:07.862343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.862374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.869264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e12d8 00:34:01.823 [2024-07-15 03:37:07.870161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.870206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:01.823 [2024-07-15 03:37:07.881368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f6020 00:34:01.823 [2024-07-15 03:37:07.882264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.823 [2024-07-15 03:37:07.882294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:01.824 [2024-07-15 03:37:07.894665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190edd58 00:34:01.824 [2024-07-15 03:37:07.895684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.824 [2024-07-15 03:37:07.895715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:01.824 [2024-07-15 03:37:07.908035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e01f8 00:34:01.824 [2024-07-15 03:37:07.909207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.824 [2024-07-15 03:37:07.909238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:01.824 [2024-07-15 03:37:07.921376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e8d30 00:34:01.824 [2024-07-15 03:37:07.922679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.824 [2024-07-15 03:37:07.922706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:01.824 
[2024-07-15 03:37:07.934613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ddc00 00:34:01.824 [2024-07-15 03:37:07.936133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.824 [2024-07-15 03:37:07.936177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:01.824 [2024-07-15 03:37:07.948020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e4140 00:34:01.824 [2024-07-15 03:37:07.949688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.824 [2024-07-15 03:37:07.949719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:01.824 [2024-07-15 03:37:07.961153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190feb58 00:34:01.824 [2024-07-15 03:37:07.963021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.824 [2024-07-15 03:37:07.963048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:07.974466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fac10 00:34:02.082 [2024-07-15 03:37:07.976504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:07.976535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:07.983201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f4f40 00:34:02.082 [2024-07-15 03:37:07.983933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:07.983961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:07.995392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190feb58 00:34:02.082 [2024-07-15 03:37:07.996324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:07.996365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.006626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f9f68 00:34:02.082 [2024-07-15 03:37:08.007507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.007533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:34:02.082 [2024-07-15 03:37:08.019749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f6cc8 00:34:02.082 [2024-07-15 03:37:08.020792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.020819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.031848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f4298 00:34:02.082 [2024-07-15 03:37:08.033095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.033123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.042864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e01f8 00:34:02.082 [2024-07-15 03:37:08.044048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.044076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.055167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ee5c8 00:34:02.082 [2024-07-15 03:37:08.056539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.056566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.066094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f6020 00:34:02.082 [2024-07-15 03:37:08.067007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.067035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.077920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e3498 00:34:02.082 [2024-07-15 03:37:08.078676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.078719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.090074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e1710 00:34:02.082 [2024-07-15 03:37:08.091003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.091038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 
cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.101136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f1ca0 00:34:02.082 [2024-07-15 03:37:08.102853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.102889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.112040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f3a28 00:34:02.082 [2024-07-15 03:37:08.112797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.112823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.124177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e5a90 00:34:02.082 [2024-07-15 03:37:08.125143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.125171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.136042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f2948 00:34:02.082 [2024-07-15 03:37:08.136935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.136963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.147946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e99d8 00:34:02.082 [2024-07-15 03:37:08.148681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.148708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.161317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f31b8 00:34:02.082 [2024-07-15 03:37:08.162947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.082 [2024-07-15 03:37:08.162974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.082 [2024-07-15 03:37:08.172128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e23b8 00:34:02.083 [2024-07-15 03:37:08.173415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.083 [2024-07-15 03:37:08.173444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:02.083 [2024-07-15 03:37:08.182964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fc998 00:34:02.083 [2024-07-15 03:37:08.184668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.083 [2024-07-15 03:37:08.184697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:02.083 [2024-07-15 03:37:08.195333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fe720 00:34:02.083 [2024-07-15 03:37:08.197176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.083 [2024-07-15 03:37:08.197204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.083 [2024-07-15 03:37:08.205398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f4298 00:34:02.083 [2024-07-15 03:37:08.206348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.083 [2024-07-15 03:37:08.206375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:02.083 [2024-07-15 03:37:08.218643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e7818 00:34:02.083 [2024-07-15 03:37:08.219752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.083 [2024-07-15 03:37:08.219779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.340 [2024-07-15 03:37:08.230955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190efae0 00:34:02.341 [2024-07-15 03:37:08.232139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.341 [2024-07-15 03:37:08.232166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.341 [2024-07-15 03:37:08.242018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f35f0 00:34:02.341 [2024-07-15 03:37:08.243252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.341 [2024-07-15 03:37:08.243279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:02.341 [2024-07-15 03:37:08.254319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e7c50 00:34:02.341 [2024-07-15 03:37:08.255713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.341 [2024-07-15 03:37:08.255740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.341 [2024-07-15 03:37:08.265245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190fd208 00:34:02.341 [2024-07-15 03:37:08.266172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.341 [2024-07-15 03:37:08.266214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:02.341 [2024-07-15 03:37:08.277173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e0630 00:34:02.341 [2024-07-15 03:37:08.277914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.341 [2024-07-15 03:37:08.277955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.341 [2024-07-15 03:37:08.289458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190e49b0 00:34:02.341 [2024-07-15 03:37:08.290390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.341 [2024-07-15 03:37:08.290433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:02.341 [2024-07-15 03:37:08.300521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190de8a8 00:34:02.341 [2024-07-15 03:37:08.302139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.341 [2024-07-15 03:37:08.302166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:02.341 [2024-07-15 03:37:08.312682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190efae0 00:34:02.341 [2024-07-15 03:37:08.314560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.341 [2024-07-15 03:37:08.314588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.341 [2024-07-15 03:37:08.323560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ddc00 00:34:02.341 [2024-07-15 03:37:08.324477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.341 [2024-07-15 03:37:08.324503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:02.341 [2024-07-15 03:37:08.335652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f4b08 00:34:02.341 [2024-07-15 03:37:08.336699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.341 [2024-07-15 03:37:08.336740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:34:02.341 [2024-07-15 03:37:08.346734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f31b8
00:34:02.341 [2024-07-15 03:37:08.347740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:02.341 [2024-07-15 03:37:08.347766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:02.341 [2024-07-15 03:37:08.358846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190ee190
00:34:02.341 [2024-07-15 03:37:08.360031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:02.341 [2024-07-15 03:37:08.360058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:34:02.341 [2024-07-15 03:37:08.371770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217dc40) with pdu=0x2000190f7da8
00:34:02.341 [2024-07-15 03:37:08.373187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:02.341 [2024-07-15 03:37:08.373213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.341
00:34:02.341 Latency(us)
00:34:02.341 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:02.341 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:02.341 nvme0n1                     :       2.00   20862.09      81.49       0.00     0.00    6128.96    2463.67   17379.18
00:34:02.341 ===================================================================================================================
00:34:02.341 Total                       :              20862.09      81.49       0.00     0.00    6128.96    2463.67   17379.18
00:34:02.341 0
00:34:02.341 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:02.341 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:02.341 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:02.341 | .driver_specific
00:34:02.341 | .nvme_error
00:34:02.341 | .status_code
00:34:02.341 | .command_transient_transport_error'
00:34:02.341 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 ))
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3343542
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3343542 ']'
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3343542
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3343542
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3343542'
00:34:02.599 killing process with pid 3343542
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3343542
00:34:02.599 Received shutdown signal, test time was about 2.000000 seconds
00:34:02.599
00:34:02.599 Latency(us)
00:34:02.599 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:02.599 ===================================================================================================================
00:34:02.599 Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:34:02.599 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3343542
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3343947
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3343947 /var/tmp/bperf.sock
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3343947 ']'
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:02.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:02.857 03:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:02.857 [2024-07-15 03:37:08.947136] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:34:02.857 [2024-07-15 03:37:08.947234] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343947 ]
00:34:02.857 I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
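For orientation, the two shell steps just traced collapse to this standalone sketch. The workspace paths, RPC names, and bdevperf flags are copied verbatim from the trace; the polling loop and the rpc_get_methods probe are assumptions standing in for the autotest waitforlisten helper, not its exact code.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock
# get_transient_errcount: read back the counter kept by --nvme-error-stat for nvme0n1
errcount=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))   # the qd=128 pass above recorded 163 transient transport errors
# relaunch bdevperf for the qd=16 pass; -z parks it until it is configured over RPC
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# crude stand-in for waitforlisten: poll until the UNIX-domain RPC socket answers
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done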
00:34:02.857 EAL: No free 2048 kB hugepages reported on node 1
00:34:03.115 [2024-07-15 03:37:09.012081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:03.115 [2024-07-15 03:37:09.104253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:34:03.115 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:03.115 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:34:03.115 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:03.115 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:03.373 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:03.373 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:03.373 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:03.373 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:03.373 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:03.373 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:03.937 nvme0n1
00:34:03.937 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:34:03.937 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:03.937 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:03.937 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:03.937 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:03.937 03:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:03.937 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:03.937 Zero copy mechanism will not be used.
00:34:03.937 Running I/O for 2 seconds...
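The records that follow are the expected failure signature for this pass. As a hedged sketch, the configuration steps just traced run in this order; the flags are copied from the trace, while the assumption is that the bare rpc_cmd calls go to the target app's default RPC socket and -s /var/tmp/bperf.sock addresses the bdevperf host app.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock
# host app: keep per-status NVMe error counters and retry failed I/O indefinitely
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# target app: start with crc32c error injection off
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
# host app: attach over TCP with data digest (--ddgst) enabled
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# target app: corrupt every 32nd crc32c operation, so the data digests mismatch and
# each affected WRITE completes with TRANSIENT TRANSPORT ERROR (00/22), as logged below
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
# start the timed 2-second run in the already-running bdevperf
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests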
00:34:03.937 [2024-07-15 03:37:09.954483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:09.954834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:09.954873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:09.964339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:09.964670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:09.964701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:09.974310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:09.974646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:09.974675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:09.982760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:09.983108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:09.983138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:09.990834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:09.991190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:09.991219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:09.999108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:09.999434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:09.999463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:10.006867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:10.006994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:10.007024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:10.015381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:10.015719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:10.015749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:10.023385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:10.023526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:10.023555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:10.031750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:10.032107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:10.032153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:10.040756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:10.041098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:10.041128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:10.050123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:10.050479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:10.050528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:10.061784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:10.062157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.937 [2024-07-15 03:37:10.062189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.937 [2024-07-15 03:37:10.072303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:03.937 [2024-07-15 03:37:10.072530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.938 [2024-07-15 03:37:10.072575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.083351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.083719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.083748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.093364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.093702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.093731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.101149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.101464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.101494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.108554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.108674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.108703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.116309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.116660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.116688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.124503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.124888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.124927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.132760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.133017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.133045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.139666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.140012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.140049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.146483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.146809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.146837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.153620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.153912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.153941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.160287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.160535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.160563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.166439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.166719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.166748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.173096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.173364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.173392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.180147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.180426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 
[2024-07-15 03:37:10.180456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.187007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.187263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.187291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.194746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.195062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.195091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.202853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.203165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.203194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.210995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.211336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.211365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.219057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.219338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.219368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.226198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.226487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.226516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.233465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.233741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.196 [2024-07-15 03:37:10.233779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.196 [2024-07-15 03:37:10.240258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.196 [2024-07-15 03:37:10.240572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.240601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.247178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.247460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.247488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.254171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.254458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.254486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.260651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.260909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.260941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.267286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.267555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.267583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.273804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.274054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.274082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.280475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.280736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.280764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.286852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.287124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.287153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.293473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.293770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.293798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.299932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.300181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.300210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.307063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.307311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.307347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.313743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.314028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.314057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.320603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.320851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.320884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.327320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.327622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.327651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.197 [2024-07-15 03:37:10.334303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.197 [2024-07-15 03:37:10.334625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.197 [2024-07-15 03:37:10.334653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.342376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.342690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.342721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.350827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.351159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.351188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.359102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.359441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.359470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.367336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.367636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.367665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.374826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.375167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.375195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.381531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 
[2024-07-15 03:37:10.381784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.381812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.388807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.389114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.389143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.395605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.395901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.395928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.402274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.402565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.402593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.408873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.409145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.409183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.415919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.416176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.416204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.422395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.422645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.422673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.429101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.429402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.429430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.436012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.436265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.436293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.442917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.443191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.443221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.449759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.450040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.450069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.456458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.456707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.456735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.462804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.463059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.463088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.469502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.469762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.469792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.476209] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.476457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.476485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.482930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.483223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.483252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.489684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.489956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.489992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.496440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.496716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.496744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.503480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.503727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.456 [2024-07-15 03:37:10.503757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.456 [2024-07-15 03:37:10.510060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.456 [2024-07-15 03:37:10.510373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.457 [2024-07-15 03:37:10.510402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.457 [2024-07-15 03:37:10.516639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:04.457 [2024-07-15 03:37:10.516898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.457 [2024-07-15 03:37:10.516937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:04.457 [2024-07-15 03:37:10.523820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90
00:34:04.457 [2024-07-15 03:37:10.524156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.457 [2024-07-15 03:37:10.524184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same ERROR/NOTICE/NOTICE triplet repeats for each queued 32-block WRITE on tqpair 0x217df80 (qid:1 cid:15), differing only in timestamp, lba, and sqhd, from 03:37:10.530 through 03:37:11.560; the repetitions are elided ...]
tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.520799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.520826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.527329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.527590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.527618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.533830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.534085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.534113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.540469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.540724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.540752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.547097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.547352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.547380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.553663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.553949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.553977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.559925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.560254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.560282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.566720] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.566977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.567005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.573419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.573668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.573695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.579804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.580082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.580110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.586466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.586764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.586792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.594062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.497 [2024-07-15 03:37:11.594308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.497 [2024-07-15 03:37:11.594336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.497 [2024-07-15 03:37:11.600639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.498 [2024-07-15 03:37:11.600921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.498 [2024-07-15 03:37:11.600948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.498 [2024-07-15 03:37:11.607213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.498 [2024-07-15 03:37:11.607461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.498 [2024-07-15 03:37:11.607488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
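Each pair of records above shows the TCP transport catching a data digest (CRC-32C) mismatch in data_crc32_calc_done() and the queue pair then completing the affected WRITE with COMMAND TRANSIENT TRANSPORT ERROR (00/22, dnr:0, i.e. retryable). A minimal sketch of how such completions can be tallied afterwards, assuming the rpc.py script and the /var/tmp/bperf.sock socket that appear later in this log:

  #!/usr/bin/env bash
  # Sketch: count transient transport errors reported for a bdev.
  # The RPC script and socket paths are the ones shown in this log, not defaults.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"
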
00:34:05.498 [2024-07-15 03:37:11.613652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.498 [2024-07-15 03:37:11.613921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.498 [2024-07-15 03:37:11.613950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.498 [2024-07-15 03:37:11.621222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.498 [2024-07-15 03:37:11.621506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.498 [2024-07-15 03:37:11.621534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.498 [2024-07-15 03:37:11.628438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.498 [2024-07-15 03:37:11.628694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.498 [2024-07-15 03:37:11.628722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.498 [2024-07-15 03:37:11.635201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.498 [2024-07-15 03:37:11.635465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.498 [2024-07-15 03:37:11.635493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.755 [2024-07-15 03:37:11.641801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.755 [2024-07-15 03:37:11.642107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.755 [2024-07-15 03:37:11.642135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.755 [2024-07-15 03:37:11.648484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.755 [2024-07-15 03:37:11.648750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.755 [2024-07-15 03:37:11.648778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.755 [2024-07-15 03:37:11.655112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.755 [2024-07-15 03:37:11.655413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.755 [2024-07-15 03:37:11.655441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.755 [2024-07-15 03:37:11.661866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.755 [2024-07-15 03:37:11.662133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.755 [2024-07-15 03:37:11.662170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.755 [2024-07-15 03:37:11.668907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.755 [2024-07-15 03:37:11.669157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.755 [2024-07-15 03:37:11.669185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.755 [2024-07-15 03:37:11.675610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.755 [2024-07-15 03:37:11.675859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.675894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.682113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.682364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.682391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.688827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.689088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.689115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.695403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.695683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.695711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.702189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.702482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.702510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.708748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.709005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.709033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.715751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.716007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.716036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.722530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.722778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.722806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.728748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.729003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.729037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.735345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.735643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.735671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.742261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.742583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.742611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.749256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.749514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.749542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.756285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.756534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.756562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.762507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.762842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.762870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.769652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.769904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.769932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.776234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.776520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.776553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.782896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.783142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.783170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.789612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.789874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.789908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.796394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.796654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 
[2024-07-15 03:37:11.796682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.803017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.803312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.803340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.809765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.810021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.810050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.816607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.816882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.816911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.823663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.823993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.824021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.830623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.830871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.830906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.837183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.756 [2024-07-15 03:37:11.837485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.756 [2024-07-15 03:37:11.837512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.756 [2024-07-15 03:37:11.843841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.757 [2024-07-15 03:37:11.844152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.757 [2024-07-15 03:37:11.844179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.757 [2024-07-15 03:37:11.850065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.757 [2024-07-15 03:37:11.850316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.757 [2024-07-15 03:37:11.850344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.757 [2024-07-15 03:37:11.856857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.757 [2024-07-15 03:37:11.857138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.757 [2024-07-15 03:37:11.857165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.757 [2024-07-15 03:37:11.863787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.757 [2024-07-15 03:37:11.864044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.757 [2024-07-15 03:37:11.864073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.757 [2024-07-15 03:37:11.870409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.757 [2024-07-15 03:37:11.870669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.757 [2024-07-15 03:37:11.870697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:05.757 [2024-07-15 03:37:11.877068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.757 [2024-07-15 03:37:11.877319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.757 [2024-07-15 03:37:11.877347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.757 [2024-07-15 03:37:11.883248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.757 [2024-07-15 03:37:11.883496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.757 [2024-07-15 03:37:11.883523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:05.757 [2024-07-15 03:37:11.889922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.757 [2024-07-15 03:37:11.890173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.757 [2024-07-15 03:37:11.890201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:05.757 [2024-07-15 03:37:11.896477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:05.757 [2024-07-15 03:37:11.896760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.757 [2024-07-15 03:37:11.896788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.014 [2024-07-15 03:37:11.903200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:06.014 [2024-07-15 03:37:11.903568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.014 [2024-07-15 03:37:11.903595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.014 [2024-07-15 03:37:11.911293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:06.014 [2024-07-15 03:37:11.911593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.014 [2024-07-15 03:37:11.911620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.014 [2024-07-15 03:37:11.919800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:06.014 [2024-07-15 03:37:11.920096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.014 [2024-07-15 03:37:11.920124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.014 [2024-07-15 03:37:11.926608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:06.014 [2024-07-15 03:37:11.926930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.014 [2024-07-15 03:37:11.926958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.014 [2024-07-15 03:37:11.933919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:06.014 [2024-07-15 03:37:11.934189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.014 [2024-07-15 03:37:11.934218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.014 [2024-07-15 03:37:11.940507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:06.014 [2024-07-15 03:37:11.940756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.014 [2024-07-15 03:37:11.940784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.014 [2024-07-15 03:37:11.947863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x217df80) with pdu=0x2000190fef90 00:34:06.014 [2024-07-15 03:37:11.948356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.014 [2024-07-15 03:37:11.948384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.014 00:34:06.014 Latency(us) 00:34:06.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:06.015 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:06.015 nvme0n1 : 2.00 4312.67 539.08 0.00 0.00 3701.58 2257.35 11116.85 00:34:06.015 =================================================================================================================== 00:34:06.015 Total : 4312.67 539.08 0.00 0.00 3701.58 2257.35 11116.85 00:34:06.015 0 00:34:06.015 03:37:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:06.015 03:37:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:06.015 03:37:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:06.015 | .driver_specific 00:34:06.015 | .nvme_error 00:34:06.015 | .status_code 00:34:06.015 | .command_transient_transport_error' 00:34:06.015 03:37:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 278 > 0 )) 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3343947 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3343947 ']' 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3343947 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3343947 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3343947' 00:34:06.272 killing process with pid 3343947 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3343947 00:34:06.272 Received shutdown signal, test time was about 2.000000 seconds 00:34:06.272 00:34:06.272 Latency(us) 00:34:06.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:34:06.272 =================================================================================================================== 00:34:06.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:06.272 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3343947 00:34:06.585 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3342698 00:34:06.585 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3342698 ']' 00:34:06.585 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3342698 00:34:06.585 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:06.585 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:06.585 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3342698 00:34:06.585 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:06.585 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:06.585 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3342698' 00:34:06.585 killing process with pid 3342698 00:34:06.585 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3342698 00:34:06.585 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3342698 00:34:06.843 00:34:06.843 real 0m15.115s 00:34:06.843 user 0m30.132s 00:34:06.843 sys 0m4.122s 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:06.843 ************************************ 00:34:06.843 END TEST nvmf_digest_error 00:34:06.843 ************************************ 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:06.843 rmmod nvme_tcp 00:34:06.843 rmmod nvme_fabrics 00:34:06.843 rmmod nvme_keyring 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3342698 ']' 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3342698 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3342698 ']' 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- 
common/autotest_common.sh@952 -- # kill -0 3342698 00:34:06.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3342698) - No such process 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3342698 is not found' 00:34:06.843 Process with pid 3342698 is not found 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:06.843 03:37:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.740 03:37:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:08.740 00:34:08.740 real 0m35.214s 00:34:08.740 user 1m2.166s 00:34:08.740 sys 0m9.849s 00:34:08.740 03:37:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:08.740 03:37:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:08.740 ************************************ 00:34:08.740 END TEST nvmf_digest 00:34:08.740 ************************************ 00:34:08.997 03:37:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:08.997 03:37:14 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:34:08.997 03:37:14 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:34:08.997 03:37:14 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:34:08.997 03:37:14 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:08.997 03:37:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:08.997 03:37:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:08.997 03:37:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:08.997 ************************************ 00:34:08.997 START TEST nvmf_bdevperf 00:34:08.997 ************************************ 00:34:08.997 03:37:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:08.997 * Looking for test storage... 
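The digest-test teardown above illustrates the autotest kill pattern: kill -0 probes whether the pid is still alive, ps --no-headers -o comm= confirms the pid has not been recycled onto an unrelated (or sudo-wrapped) process, and an already-gone process is reported rather than treated as a failure. A simplified sketch of that flow (illustrative only, not the verbatim autotest_common.sh implementation):

  # Sketch: terminate a test process only if it still exists and looks sane.
  killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
      echo "Process with pid $pid is not found"
      return 0
    fi
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name == sudo ]] && return 1      # refuse to blindly kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
  }
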
00:34:08.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:08.997 03:37:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.997 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:08.997 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.997 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.997 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:08.998 03:37:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:10.897 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:10.897 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.897 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:10.898 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:10.898 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:10.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:10.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:34:10.898 00:34:10.898 --- 10.0.0.2 ping statistics --- 00:34:10.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.898 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:10.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:10.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:34:10.898 00:34:10.898 --- 10.0.0.1 ping statistics --- 00:34:10.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.898 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3346288 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3346288 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3346288 ']' 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:10.898 03:37:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:10.898 [2024-07-15 03:37:16.958312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:10.898 [2024-07-15 03:37:16.958389] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.898 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.898 [2024-07-15 03:37:17.031135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:11.157 [2024-07-15 03:37:17.131240] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
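Note on the trace above: device discovery and network plumbing are now complete. Each supported PCI function was resolved to its kernel netdev through sysfs, then one port (cvl_0_0) was moved into a private network namespace to act as the target while the other (cvl_0_1) stayed in the root namespace as the initiator. A condensed, hand-runnable recap of those steps follows; it only restates commands already visible in the trace (the cvl_0_* names and 0000:0a:00.* addresses are the ones this run reported).

# Resolve a PCI function to its netdev the same way the nvmf/common.sh glob does
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net devices under $pci: ${dev##*/}"
    done
done

# Split the two ports: target port inside a namespace, initiator port outside
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator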
00:34:11.157 [2024-07-15 03:37:17.131292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.157 [2024-07-15 03:37:17.131319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.157 [2024-07-15 03:37:17.131331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.157 [2024-07-15 03:37:17.131340] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:11.157 [2024-07-15 03:37:17.131421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:11.157 [2024-07-15 03:37:17.131486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:11.157 [2024-07-15 03:37:17.131488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.157 [2024-07-15 03:37:17.279582] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.157 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.416 Malloc0 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.416 [2024-07-15 03:37:17.345129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:11.416 03:37:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:11.417 03:37:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:11.417 { 00:34:11.417 "params": { 00:34:11.417 "name": "Nvme$subsystem", 00:34:11.417 "trtype": "$TEST_TRANSPORT", 00:34:11.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.417 "adrfam": "ipv4", 00:34:11.417 "trsvcid": "$NVMF_PORT", 00:34:11.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.417 "hdgst": ${hdgst:-false}, 00:34:11.417 "ddgst": ${ddgst:-false} 00:34:11.417 }, 00:34:11.417 "method": "bdev_nvme_attach_controller" 00:34:11.417 } 00:34:11.417 EOF 00:34:11.417 )") 00:34:11.417 03:37:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:11.417 03:37:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:11.417 03:37:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:11.417 03:37:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:11.417 "params": { 00:34:11.417 "name": "Nvme1", 00:34:11.417 "trtype": "tcp", 00:34:11.417 "traddr": "10.0.0.2", 00:34:11.417 "adrfam": "ipv4", 00:34:11.417 "trsvcid": "4420", 00:34:11.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:11.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:11.417 "hdgst": false, 00:34:11.417 "ddgst": false 00:34:11.417 }, 00:34:11.417 "method": "bdev_nvme_attach_controller" 00:34:11.417 }' 00:34:11.417 [2024-07-15 03:37:17.391774] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:11.417 [2024-07-15 03:37:17.391848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346439 ] 00:34:11.417 EAL: No free 2048 kB hugepages reported on node 1 00:34:11.417 [2024-07-15 03:37:17.453939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.417 [2024-07-15 03:37:17.554021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.676 Running I/O for 1 seconds... 
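At this point the target is fully configured (TCP transport, Malloc0 namespace, listener on 10.0.0.2:4420) and the first 1-second bdevperf baseline is running. The same target bring-up the rpc_cmd calls above performed can be reproduced by hand against the app's /var/tmp/spdk.sock socket; a minimal sketch, assuming SPDK's stock scripts/rpc.py client rather than the test's rpc_cmd wrapper:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed client path

$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420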
00:34:13.052
00:34:13.052                                                                                           Latency(us)
00:34:13.052 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:13.052 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:13.052    Verification LBA range: start 0x0 length 0x4000
00:34:13.052    Nvme1n1             :       1.01    8659.56      33.83       0.00       0.00   14721.56    3021.94   15049.01
00:34:13.052 ===================================================================================================================
00:34:13.052 Total                       :            8659.56      33.83       0.00       0.00   14721.56    3021.94   15049.01
00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3346575 00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:13.052 { 00:34:13.052 "params": { 00:34:13.052 "name": "Nvme$subsystem", 00:34:13.052 "trtype": "$TEST_TRANSPORT", 00:34:13.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:13.052 "adrfam": "ipv4", 00:34:13.052 "trsvcid": "$NVMF_PORT", 00:34:13.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:13.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:13.052 "hdgst": ${hdgst:-false}, 00:34:13.052 "ddgst": ${ddgst:-false} 00:34:13.052 }, 00:34:13.052 "method": "bdev_nvme_attach_controller" 00:34:13.052 } 00:34:13.052 EOF 00:34:13.052 )") 00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:13.052 03:37:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:13.052 "params": { 00:34:13.052 "name": "Nvme1", 00:34:13.052 "trtype": "tcp", 00:34:13.052 "traddr": "10.0.0.2", 00:34:13.052 "adrfam": "ipv4", 00:34:13.052 "trsvcid": "4420", 00:34:13.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:13.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:13.053 "hdgst": false, 00:34:13.053 "ddgst": false 00:34:13.053 }, 00:34:13.053 "method": "bdev_nvme_attach_controller" 00:34:13.053 }' 00:34:13.053 [2024-07-15 03:37:19.071581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:13.053 [2024-07-15 03:37:19.071657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346575 ] 00:34:13.053 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.053 [2024-07-15 03:37:19.131754] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.310 [2024-07-15 03:37:19.219267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.310 Running I/O for 15 seconds...
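Two details worth noting before the failure output below. First, --json /dev/fd/63 is bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller config shown above into a pipe, and bdevperf reads that pipe as an ordinary file, so the config never touches disk. A trivial demo of the mechanism (the JSON payload here is only an illustration):

# <(cmd) expands to a /dev/fd/NN path whose contents are cmd's stdout
cat <(echo '{"demo": true}')   # cat is handed a path such as /dev/fd/63

Second, unlike the 1-second baseline above, this 15-second run is the failover exercise: as the next trace block shows, host/bdevperf.sh kills the target (kill -9 3346288) while I/O is still in flight, which produces the flood of aborted completions and reset attempts that follow.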
00:34:16.596 03:37:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3346288 00:34:16.596 03:37:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:16.596 [2024-07-15 03:37:22.038295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038698] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.596 [2024-07-15 03:37:22.038960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.596 [2024-07-15 03:37:22.038975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.596 [2024-07-15 03:37:22.038990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.039971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.039986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.040000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.040016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.040030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.040045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.040059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.040075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.040089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 
[2024-07-15 03:37:22.040104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.040118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.040134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.040148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.040179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.040193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.040208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.040236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.040255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.597 [2024-07-15 03:37:22.040270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.597 [2024-07-15 03:37:22.040288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040782] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.040973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.040987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.598 [2024-07-15 03:37:22.041536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.598 [2024-07-15 03:37:22.041551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.041589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.041622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.041654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.041687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.041719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.041751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.041784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 
03:37:22.041817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.041849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.041894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.041944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.041973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.041989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.599 [2024-07-15 03:37:22.042695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137d050 is same with the state(5) to be set 00:34:16.599 [2024-07-15 03:37:22.042733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:16.599 [2024-07-15 03:37:22.042746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:16.599 [2024-07-15 03:37:22.042759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46400 len:8 PRP1 0x0 PRP2 0x0 00:34:16.599 [2024-07-15 03:37:22.042774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.599 [2024-07-15 03:37:22.042838] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x137d050 was disconnected and freed. reset controller. 
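
Note on the dump above: every queued READ on qid:1 (lba 46192 onward) is completed with status (00/08) — SCT 0x0 (generic) / SC 0x08 (Command Aborted due to SQ Deletion) — after which qpair 0x137d050 is disconnected and freed and a controller reset is scheduled. A minimal sketch of how an SPDK application callback could single out that status and requeue the I/O instead of failing it upward; handle_read_completion, struct io_ctx, requeue_io(), and fail_io() are hypothetical names, while the types and constants from spdk/nvme.h are real:

    #include "spdk/nvme.h"   /* spdk_nvme_cpl, SPDK_NVME_SCT_/SC_ constants */

    /* Hypothetical application context passed as cb_arg to
     * spdk_nvme_ns_cmd_read(), plus assumed application helpers. */
    struct io_ctx;
    void requeue_io(struct io_ctx *io);
    void fail_io(struct io_ctx *io);

    /* Completion callback (matches the spdk_nvme_cmd_cb signature):
     * distinguish "aborted because the SQ was deleted" -- a transient,
     * reset-related status -- from a real command error. */
    static void
    handle_read_completion(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        struct io_ctx *io = cb_arg;

        if (!spdk_nvme_cpl_is_error(cpl)) {
            return;  /* normal completion */
        }

        /* "(00/08)" in the log: SCT 0x0 generic, SC 0x08 SQ deletion. */
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            requeue_io(io);  /* retry once the controller reset completes */
        } else {
            fail_io(io);
        }
    }
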
00:34:16.599 [2024-07-15 03:37:22.046730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.600 [2024-07-15 03:37:22.046805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.600 [2024-07-15 03:37:22.047580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.600 [2024-07-15 03:37:22.047627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.600 [2024-07-15 03:37:22.047651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.600 [2024-07-15 03:37:22.047902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.600 [2024-07-15 03:37:22.048141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.600 [2024-07-15 03:37:22.048177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.600 [2024-07-15 03:37:22.048197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.600 [2024-07-15 03:37:22.051816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.600 [2024-07-15 03:37:22.060964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.600 [2024-07-15 03:37:22.061361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.600 [2024-07-15 03:37:22.061392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.600 [2024-07-15 03:37:22.061410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.600 [2024-07-15 03:37:22.061649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.600 [2024-07-15 03:37:22.061904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.600 [2024-07-15 03:37:22.061928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.600 [2024-07-15 03:37:22.061943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.600 [2024-07-15 03:37:22.065614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
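
Each retry above follows the same pattern: nvme_ctrlr_disconnect tears the qpair down, posix_sock_create's connect() to 10.0.0.2:4420 is refused with errno = 111 (ECONNREFUSED — nothing is listening on the target side during this phase of the test), flushing the dead tqpair reports Bad file descriptor, reconnect_poll_async marks reinitialization failed, and bdev_nvme logs "Resetting controller failed." before the next attempt. A standalone sketch of the same probe using the plain POSIX socket calls that posix.c wraps; try_connect is an illustrative name, not an SPDK symbol:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* One TCP connect attempt to addr:port (4420 is the NVMe/TCP port).
     * Returns the connected fd on success, -errno on failure, so the
     * caller can treat -ECONNREFUSED (111 on Linux) as "target not up". */
    int
    try_connect(const char *addr, uint16_t port)
    {
        struct sockaddr_in sa = {
            .sin_family = AF_INET,
            .sin_port = htons(port),
        };
        int fd, rc;

        if (inet_pton(AF_INET, addr, &sa.sin_addr) != 1) {
            return -EINVAL;
        }
        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -errno;
        }
        rc = connect(fd, (struct sockaddr *)&sa, sizeof(sa));
        if (rc != 0) {
            rc = -errno;   /* -ECONNREFUSED == -111 in this log */
            close(fd);
            return rc;
        }
        return fd;
    }

With no listener at 10.0.0.2:4420 this returns -ECONNREFUSED immediately, which is exactly the "connect() failed, errno = 111" printed before each failed reset above.
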
00:34:16.600 [2024-07-15 03:37:22.074941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.600 [2024-07-15 03:37:22.075375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.600 [2024-07-15 03:37:22.075407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.600 [2024-07-15 03:37:22.075426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.600 [2024-07-15 03:37:22.075665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.600 [2024-07-15 03:37:22.075919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.600 [2024-07-15 03:37:22.075943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.600 [2024-07-15 03:37:22.075959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.600 [2024-07-15 03:37:22.079538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.600 [2024-07-15 03:37:22.088856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.600 [2024-07-15 03:37:22.089261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.600 [2024-07-15 03:37:22.089293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.600 [2024-07-15 03:37:22.089311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.600 [2024-07-15 03:37:22.089550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.600 [2024-07-15 03:37:22.089792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.600 [2024-07-15 03:37:22.089815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.600 [2024-07-15 03:37:22.089836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.600 [2024-07-15 03:37:22.093421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.600 [2024-07-15 03:37:22.102713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.600 [2024-07-15 03:37:22.103139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.600 [2024-07-15 03:37:22.103171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.600 [2024-07-15 03:37:22.103189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.600 [2024-07-15 03:37:22.103427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.600 [2024-07-15 03:37:22.103670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.600 [2024-07-15 03:37:22.103693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.600 [2024-07-15 03:37:22.103708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.600 [2024-07-15 03:37:22.107289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.600 [2024-07-15 03:37:22.116571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.600 [2024-07-15 03:37:22.116994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.600 [2024-07-15 03:37:22.117026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.600 [2024-07-15 03:37:22.117043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.600 [2024-07-15 03:37:22.117282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.600 [2024-07-15 03:37:22.117523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.600 [2024-07-15 03:37:22.117546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.600 [2024-07-15 03:37:22.117562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.600 [2024-07-15 03:37:22.121143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.600 [2024-07-15 03:37:22.130450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.600 [2024-07-15 03:37:22.130865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.600 [2024-07-15 03:37:22.130904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.600 [2024-07-15 03:37:22.130922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.600 [2024-07-15 03:37:22.131161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.600 [2024-07-15 03:37:22.131403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.600 [2024-07-15 03:37:22.131426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.600 [2024-07-15 03:37:22.131441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.600 [2024-07-15 03:37:22.135021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.600 [2024-07-15 03:37:22.144296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.600 [2024-07-15 03:37:22.144696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.600 [2024-07-15 03:37:22.144738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.600 [2024-07-15 03:37:22.144753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.600 [2024-07-15 03:37:22.145027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.600 [2024-07-15 03:37:22.145270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.600 [2024-07-15 03:37:22.145293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.600 [2024-07-15 03:37:22.145308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.600 [2024-07-15 03:37:22.148886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.600 [2024-07-15 03:37:22.158159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.600 [2024-07-15 03:37:22.158525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.600 [2024-07-15 03:37:22.158556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.600 [2024-07-15 03:37:22.158573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.600 [2024-07-15 03:37:22.158811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.600 [2024-07-15 03:37:22.159063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.600 [2024-07-15 03:37:22.159087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.600 [2024-07-15 03:37:22.159102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.600 [2024-07-15 03:37:22.162677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.600 [2024-07-15 03:37:22.172175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.600 [2024-07-15 03:37:22.172539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.600 [2024-07-15 03:37:22.172569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.600 [2024-07-15 03:37:22.172587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.600 [2024-07-15 03:37:22.172824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.600 [2024-07-15 03:37:22.173076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.600 [2024-07-15 03:37:22.173100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.601 [2024-07-15 03:37:22.173115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.601 [2024-07-15 03:37:22.176685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.601 [2024-07-15 03:37:22.186183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.601 [2024-07-15 03:37:22.186609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.601 [2024-07-15 03:37:22.186652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.601 [2024-07-15 03:37:22.186667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.601 [2024-07-15 03:37:22.186947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.601 [2024-07-15 03:37:22.187204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.601 [2024-07-15 03:37:22.187227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.601 [2024-07-15 03:37:22.187242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.601 [2024-07-15 03:37:22.190814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.601 [2024-07-15 03:37:22.200124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.601 [2024-07-15 03:37:22.200542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.601 [2024-07-15 03:37:22.200574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.601 [2024-07-15 03:37:22.200591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.601 [2024-07-15 03:37:22.200830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.601 [2024-07-15 03:37:22.201082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.601 [2024-07-15 03:37:22.201106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.601 [2024-07-15 03:37:22.201121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.601 [2024-07-15 03:37:22.204691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.601 [2024-07-15 03:37:22.213976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.601 [2024-07-15 03:37:22.214366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.601 [2024-07-15 03:37:22.214397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.601 [2024-07-15 03:37:22.214415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.601 [2024-07-15 03:37:22.214652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.601 [2024-07-15 03:37:22.214905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.601 [2024-07-15 03:37:22.214929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.601 [2024-07-15 03:37:22.214944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.601 [2024-07-15 03:37:22.218512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.601 [2024-07-15 03:37:22.228008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.601 [2024-07-15 03:37:22.228413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.601 [2024-07-15 03:37:22.228444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.601 [2024-07-15 03:37:22.228461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.601 [2024-07-15 03:37:22.228699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.601 [2024-07-15 03:37:22.228952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.601 [2024-07-15 03:37:22.228976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.601 [2024-07-15 03:37:22.228997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.601 [2024-07-15 03:37:22.232569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.601 [2024-07-15 03:37:22.241849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.601 [2024-07-15 03:37:22.242256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.601 [2024-07-15 03:37:22.242288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.601 [2024-07-15 03:37:22.242305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.601 [2024-07-15 03:37:22.242544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.601 [2024-07-15 03:37:22.242787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.601 [2024-07-15 03:37:22.242810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.601 [2024-07-15 03:37:22.242825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.601 [2024-07-15 03:37:22.246412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.601 [2024-07-15 03:37:22.255692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.601 [2024-07-15 03:37:22.256091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.601 [2024-07-15 03:37:22.256123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.601 [2024-07-15 03:37:22.256141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.601 [2024-07-15 03:37:22.256379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.601 [2024-07-15 03:37:22.256621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.601 [2024-07-15 03:37:22.256644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.601 [2024-07-15 03:37:22.256659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.601 [2024-07-15 03:37:22.260253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.601 [2024-07-15 03:37:22.269534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.601 [2024-07-15 03:37:22.269968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.601 [2024-07-15 03:37:22.269995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.601 [2024-07-15 03:37:22.270025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.601 [2024-07-15 03:37:22.270270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.601 [2024-07-15 03:37:22.270513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.601 [2024-07-15 03:37:22.270536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.601 [2024-07-15 03:37:22.270551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.601 [2024-07-15 03:37:22.274134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.601 [2024-07-15 03:37:22.283426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.601 [2024-07-15 03:37:22.283847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.602 [2024-07-15 03:37:22.283893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.602 [2024-07-15 03:37:22.283913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.602 [2024-07-15 03:37:22.284152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.602 [2024-07-15 03:37:22.284395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.602 [2024-07-15 03:37:22.284418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.602 [2024-07-15 03:37:22.284433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.602 [2024-07-15 03:37:22.288014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.602 [2024-07-15 03:37:22.297291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.602 [2024-07-15 03:37:22.297705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.602 [2024-07-15 03:37:22.297736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.602 [2024-07-15 03:37:22.297753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.602 [2024-07-15 03:37:22.298002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.602 [2024-07-15 03:37:22.298266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.602 [2024-07-15 03:37:22.298286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.602 [2024-07-15 03:37:22.298299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.602 [2024-07-15 03:37:22.301833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.602 [2024-07-15 03:37:22.311064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.602 [2024-07-15 03:37:22.311432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.602 [2024-07-15 03:37:22.311461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.602 [2024-07-15 03:37:22.311476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.602 [2024-07-15 03:37:22.311719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.602 [2024-07-15 03:37:22.311971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.602 [2024-07-15 03:37:22.311995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.602 [2024-07-15 03:37:22.312010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.602 [2024-07-15 03:37:22.315570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.602 [2024-07-15 03:37:22.325061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.602 [2024-07-15 03:37:22.325461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.602 [2024-07-15 03:37:22.325492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.602 [2024-07-15 03:37:22.325509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.602 [2024-07-15 03:37:22.325747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.602 [2024-07-15 03:37:22.326006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.602 [2024-07-15 03:37:22.326031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.602 [2024-07-15 03:37:22.326046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.602 [2024-07-15 03:37:22.329617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.602 [2024-07-15 03:37:22.338918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.602 [2024-07-15 03:37:22.339339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.602 [2024-07-15 03:37:22.339371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.602 [2024-07-15 03:37:22.339388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.602 [2024-07-15 03:37:22.339626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.602 [2024-07-15 03:37:22.339869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.602 [2024-07-15 03:37:22.339902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.602 [2024-07-15 03:37:22.339918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.602 [2024-07-15 03:37:22.343489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.602 [2024-07-15 03:37:22.352766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.602 [2024-07-15 03:37:22.353198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.602 [2024-07-15 03:37:22.353230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.602 [2024-07-15 03:37:22.353247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.602 [2024-07-15 03:37:22.353485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.602 [2024-07-15 03:37:22.353727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.602 [2024-07-15 03:37:22.353750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.602 [2024-07-15 03:37:22.353766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.602 [2024-07-15 03:37:22.357347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.602 [2024-07-15 03:37:22.366620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.602 [2024-07-15 03:37:22.367035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.602 [2024-07-15 03:37:22.367066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.602 [2024-07-15 03:37:22.367084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.602 [2024-07-15 03:37:22.367321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.602 [2024-07-15 03:37:22.367563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.602 [2024-07-15 03:37:22.367586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.602 [2024-07-15 03:37:22.367601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.602 [2024-07-15 03:37:22.371189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.602 [2024-07-15 03:37:22.380461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.602 [2024-07-15 03:37:22.380890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.602 [2024-07-15 03:37:22.380921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.602 [2024-07-15 03:37:22.380954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.602 [2024-07-15 03:37:22.381196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.602 [2024-07-15 03:37:22.381449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.602 [2024-07-15 03:37:22.381473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.602 [2024-07-15 03:37:22.381488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.602 [2024-07-15 03:37:22.384967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.602 [2024-07-15 03:37:22.394465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.602 [2024-07-15 03:37:22.394888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.602 [2024-07-15 03:37:22.394935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.602 [2024-07-15 03:37:22.394951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.602 [2024-07-15 03:37:22.395179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.602 [2024-07-15 03:37:22.395431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.602 [2024-07-15 03:37:22.395455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.602 [2024-07-15 03:37:22.395470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.602 [2024-07-15 03:37:22.399049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.602 [2024-07-15 03:37:22.408342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.602 [2024-07-15 03:37:22.408834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.602 [2024-07-15 03:37:22.408893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.602 [2024-07-15 03:37:22.408912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.602 [2024-07-15 03:37:22.409157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.602 [2024-07-15 03:37:22.409414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.602 [2024-07-15 03:37:22.409437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.602 [2024-07-15 03:37:22.409452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.603 [2024-07-15 03:37:22.413015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.603 [2024-07-15 03:37:22.422303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.603 [2024-07-15 03:37:22.422822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.603 [2024-07-15 03:37:22.422874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.603 [2024-07-15 03:37:22.422905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.603 [2024-07-15 03:37:22.423145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.603 [2024-07-15 03:37:22.423388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.603 [2024-07-15 03:37:22.423411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.603 [2024-07-15 03:37:22.423426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.603 [2024-07-15 03:37:22.427004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.603 [2024-07-15 03:37:22.436291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.603 [2024-07-15 03:37:22.436689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.603 [2024-07-15 03:37:22.436721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.603 [2024-07-15 03:37:22.436738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.603 [2024-07-15 03:37:22.436988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.603 [2024-07-15 03:37:22.437231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.603 [2024-07-15 03:37:22.437255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.603 [2024-07-15 03:37:22.437270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.603 [2024-07-15 03:37:22.440843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.603 [2024-07-15 03:37:22.450338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.603 [2024-07-15 03:37:22.450749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.603 [2024-07-15 03:37:22.450780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.603 [2024-07-15 03:37:22.450797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.603 [2024-07-15 03:37:22.451046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.603 [2024-07-15 03:37:22.451289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.603 [2024-07-15 03:37:22.451312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.603 [2024-07-15 03:37:22.451327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.603 [2024-07-15 03:37:22.454905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.603 [2024-07-15 03:37:22.464186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.603 [2024-07-15 03:37:22.464595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.603 [2024-07-15 03:37:22.464626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.603 [2024-07-15 03:37:22.464643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.603 [2024-07-15 03:37:22.464891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.603 [2024-07-15 03:37:22.465135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.603 [2024-07-15 03:37:22.465164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.603 [2024-07-15 03:37:22.465180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.603 [2024-07-15 03:37:22.468750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.603 [2024-07-15 03:37:22.478036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.603 [2024-07-15 03:37:22.478437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.603 [2024-07-15 03:37:22.478468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.603 [2024-07-15 03:37:22.478485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.603 [2024-07-15 03:37:22.478723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.603 [2024-07-15 03:37:22.478976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.603 [2024-07-15 03:37:22.479000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.603 [2024-07-15 03:37:22.479015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.603 [2024-07-15 03:37:22.482588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.603 [2024-07-15 03:37:22.491866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.603 [2024-07-15 03:37:22.492284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.603 [2024-07-15 03:37:22.492315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.603 [2024-07-15 03:37:22.492332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.603 [2024-07-15 03:37:22.492569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.603 [2024-07-15 03:37:22.492810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.603 [2024-07-15 03:37:22.492833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.603 [2024-07-15 03:37:22.492849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.603 [2024-07-15 03:37:22.496431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.603 [2024-07-15 03:37:22.505710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.603 [2024-07-15 03:37:22.506136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.603 [2024-07-15 03:37:22.506167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.603 [2024-07-15 03:37:22.506185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.603 [2024-07-15 03:37:22.506422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.603 [2024-07-15 03:37:22.506664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.603 [2024-07-15 03:37:22.506687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.603 [2024-07-15 03:37:22.506702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.603 [2024-07-15 03:37:22.510284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.603 [2024-07-15 03:37:22.519571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.603 [2024-07-15 03:37:22.519977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.603 [2024-07-15 03:37:22.520009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.603 [2024-07-15 03:37:22.520026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.603 [2024-07-15 03:37:22.520265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.603 [2024-07-15 03:37:22.520507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.603 [2024-07-15 03:37:22.520530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.603 [2024-07-15 03:37:22.520545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.603 [2024-07-15 03:37:22.524125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.603 [2024-07-15 03:37:22.533611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.603 [2024-07-15 03:37:22.534022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.603 [2024-07-15 03:37:22.534054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.603 [2024-07-15 03:37:22.534071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.603 [2024-07-15 03:37:22.534309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.603 [2024-07-15 03:37:22.534551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.603 [2024-07-15 03:37:22.534574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.603 [2024-07-15 03:37:22.534589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.603 [2024-07-15 03:37:22.538170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.603 [2024-07-15 03:37:22.547459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.603 [2024-07-15 03:37:22.547907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.603 [2024-07-15 03:37:22.547935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.603 [2024-07-15 03:37:22.547965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.603 [2024-07-15 03:37:22.548179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.603 [2024-07-15 03:37:22.548446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.604 [2024-07-15 03:37:22.548466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.604 [2024-07-15 03:37:22.548479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.604 [2024-07-15 03:37:22.551929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.604 [2024-07-15 03:37:22.561401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.604 [2024-07-15 03:37:22.561812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.604 [2024-07-15 03:37:22.561843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.604 [2024-07-15 03:37:22.561860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.604 [2024-07-15 03:37:22.562114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.604 [2024-07-15 03:37:22.562358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.604 [2024-07-15 03:37:22.562381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.604 [2024-07-15 03:37:22.562396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.604 [2024-07-15 03:37:22.565973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.604 [2024-07-15 03:37:22.575248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.604 [2024-07-15 03:37:22.575616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.604 [2024-07-15 03:37:22.575648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.604 [2024-07-15 03:37:22.575665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.604 [2024-07-15 03:37:22.575915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.604 [2024-07-15 03:37:22.576158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.604 [2024-07-15 03:37:22.576181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.604 [2024-07-15 03:37:22.576196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.604 [2024-07-15 03:37:22.579773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.604 [2024-07-15 03:37:22.589277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.604 [2024-07-15 03:37:22.589688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.604 [2024-07-15 03:37:22.589719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:16.604 [2024-07-15 03:37:22.589736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:16.604 [2024-07-15 03:37:22.589985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:16.604 [2024-07-15 03:37:22.590229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.604 [2024-07-15 03:37:22.590252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.604 [2024-07-15 03:37:22.590267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.604 [2024-07-15 03:37:22.593839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 48 further reconnect attempts (2024-07-15 03:37:22.603 through 03:37:23.262) omitted: each repeats the identical errno 111 connect failure against 10.0.0.2:4420 and ends in "Resetting controller failed.", with a new attempt roughly every 14 ms ...]
00:34:17.431 [2024-07-15 03:37:23.271269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.431 [2024-07-15 03:37:23.271701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.431 [2024-07-15 03:37:23.271732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.431 [2024-07-15 03:37:23.271749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.431 [2024-07-15 03:37:23.272003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.431 [2024-07-15 03:37:23.272247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.431 [2024-07-15 03:37:23.272273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.431 [2024-07-15 03:37:23.272289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.431 [2024-07-15 03:37:23.275872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.431 [2024-07-15 03:37:23.285215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.431 [2024-07-15 03:37:23.285634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.431 [2024-07-15 03:37:23.285668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.431 [2024-07-15 03:37:23.285686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.431 [2024-07-15 03:37:23.285938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.431 [2024-07-15 03:37:23.286182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.431 [2024-07-15 03:37:23.286208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.431 [2024-07-15 03:37:23.286224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.431 [2024-07-15 03:37:23.289802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.431 [2024-07-15 03:37:23.299092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.431 [2024-07-15 03:37:23.299507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.431 [2024-07-15 03:37:23.299540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.431 [2024-07-15 03:37:23.299558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.431 [2024-07-15 03:37:23.299798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.431 [2024-07-15 03:37:23.300051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.431 [2024-07-15 03:37:23.300091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.431 [2024-07-15 03:37:23.300107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.431 [2024-07-15 03:37:23.303610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.431 [2024-07-15 03:37:23.313138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.431 [2024-07-15 03:37:23.313550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.431 [2024-07-15 03:37:23.313587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.431 [2024-07-15 03:37:23.313606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.431 [2024-07-15 03:37:23.313844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.431 [2024-07-15 03:37:23.314096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.431 [2024-07-15 03:37:23.314122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.431 [2024-07-15 03:37:23.314138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.431 [2024-07-15 03:37:23.317711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.432 [2024-07-15 03:37:23.326993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.327513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.327566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.327584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.327822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.328073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.328099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.328115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.331694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.432 [2024-07-15 03:37:23.340996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.341409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.341441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.341459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.341698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.341956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.341983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.341999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.345573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.432 [2024-07-15 03:37:23.354859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.355275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.355307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.355325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.355563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.355814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.355840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.355856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.359447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.432 [2024-07-15 03:37:23.368729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.369134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.369167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.369185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.369423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.369666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.369692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.369708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.373296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.432 [2024-07-15 03:37:23.382589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.382998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.383030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.383048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.383288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.383531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.383556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.383572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.387158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.432 [2024-07-15 03:37:23.396463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.396882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.396924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.396943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.397182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.397425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.397450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.397466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.401050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.432 [2024-07-15 03:37:23.410348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.410737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.410769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.410787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.411037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.411280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.411305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.411321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.414917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.432 [2024-07-15 03:37:23.424223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.424634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.424665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.424683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.424933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.425176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.425201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.425218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.428799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.432 [2024-07-15 03:37:23.438098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.438509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.438540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.438557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.438796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.439049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.439075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.439092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.442667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.432 [2024-07-15 03:37:23.451967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.452379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.452411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.452434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.452674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.452930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.452956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.452973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.456550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.432 [2024-07-15 03:37:23.465839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.466370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.466424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.466442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.466680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.466935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.466962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.466978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.470552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.432 [2024-07-15 03:37:23.479836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.480346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.480400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.480418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.480656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.480913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.480940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.480956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.484536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.432 [2024-07-15 03:37:23.493822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.494310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.494360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.494379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.432 [2024-07-15 03:37:23.494617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.432 [2024-07-15 03:37:23.494860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.432 [2024-07-15 03:37:23.494903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.432 [2024-07-15 03:37:23.494921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.432 [2024-07-15 03:37:23.498497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.432 [2024-07-15 03:37:23.507787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.432 [2024-07-15 03:37:23.508181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.432 [2024-07-15 03:37:23.508215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.432 [2024-07-15 03:37:23.508234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.433 [2024-07-15 03:37:23.508474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.433 [2024-07-15 03:37:23.508718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.433 [2024-07-15 03:37:23.508743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.433 [2024-07-15 03:37:23.508759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.433 [2024-07-15 03:37:23.512350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.433 [2024-07-15 03:37:23.521645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.433 [2024-07-15 03:37:23.522053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-15 03:37:23.522086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.433 [2024-07-15 03:37:23.522105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.433 [2024-07-15 03:37:23.522344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.433 [2024-07-15 03:37:23.522589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.433 [2024-07-15 03:37:23.522614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.433 [2024-07-15 03:37:23.522630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.433 [2024-07-15 03:37:23.526216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.433 [2024-07-15 03:37:23.535501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.433 [2024-07-15 03:37:23.535919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-15 03:37:23.535951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.433 [2024-07-15 03:37:23.535969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.433 [2024-07-15 03:37:23.536208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.433 [2024-07-15 03:37:23.536450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.433 [2024-07-15 03:37:23.536476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.433 [2024-07-15 03:37:23.536492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.433 [2024-07-15 03:37:23.540077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.433 [2024-07-15 03:37:23.549361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.433 [2024-07-15 03:37:23.549822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-15 03:37:23.549850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.433 [2024-07-15 03:37:23.549891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.433 [2024-07-15 03:37:23.550124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.433 [2024-07-15 03:37:23.550373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.433 [2024-07-15 03:37:23.550395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.433 [2024-07-15 03:37:23.550409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.433 [2024-07-15 03:37:23.553933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.433 [2024-07-15 03:37:23.563221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.433 [2024-07-15 03:37:23.563637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.433 [2024-07-15 03:37:23.563669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.433 [2024-07-15 03:37:23.563687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.433 [2024-07-15 03:37:23.563941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.433 [2024-07-15 03:37:23.564184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.433 [2024-07-15 03:37:23.564210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.433 [2024-07-15 03:37:23.564226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.433 [2024-07-15 03:37:23.567803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
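Each attempt in this log walks the same sequence of sources: nvme_ctrlr.c:1720 (disconnect/reset NOTICE), posix.c:1038 (connect() refused), nvme_tcp.c:2383 and :327 (qpair connect and recv-state errors), nvme_tcp.c:2185 (flush on the now-closed fd, hence "Bad file descriptor"), then nvme_ctrlr.c:4164/1818/1106 (init fails, reconnect poll fails, controller marked failed), and finally bdev_nvme.c:2065 reporting the reset as failed before the next attempt starts. A hypothetical, heavily simplified sketch of that retry shape (plain C, not SPDK's actual state machine; try_connect() and the 14 ms pause are stand-ins for the transport connect and the observed spacing between attempts):

```c
/* Hypothetical sketch of the reset/reconnect loop the log reflects.
 * This is NOT SPDK's implementation, just the control-flow shape. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in for the TCP qpair connect; the target is down, so it always
 * fails the way posix_sock_create does above. */
static bool try_connect(void)
{
    return false;
}

int main(void)
{
    for (int attempt = 1; attempt <= 5; attempt++) {
        printf("resetting controller (attempt %d)\n", attempt);
        if (try_connect()) {
            printf("controller reinitialized\n");
            return 0;
        }
        /* Mirrors the error cascade: ctrlr in error state -> reconnect
         * poll fails -> ctrlr marked failed -> reset reported failed. */
        printf("controller reinitialization failed\n");
        printf("Resetting controller failed.\n");
        usleep(14000);  /* the log shows roughly 14 ms between attempts */
    }
    return 1;
}
```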
00:34:17.691 [2024-07-15 03:37:23.577112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.691 [2024-07-15 03:37:23.577510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.691 [2024-07-15 03:37:23.577543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.691 [2024-07-15 03:37:23.577561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.691 [2024-07-15 03:37:23.577799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.691 [2024-07-15 03:37:23.578053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.691 [2024-07-15 03:37:23.578080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.691 [2024-07-15 03:37:23.578096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.691 [2024-07-15 03:37:23.581672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.691 [2024-07-15 03:37:23.590993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.691 [2024-07-15 03:37:23.591405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.691 [2024-07-15 03:37:23.591438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.691 [2024-07-15 03:37:23.591456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.691 [2024-07-15 03:37:23.591702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.691 [2024-07-15 03:37:23.591960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.691 [2024-07-15 03:37:23.591987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.691 [2024-07-15 03:37:23.592004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.691 [2024-07-15 03:37:23.595578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.691 [2024-07-15 03:37:23.604865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.691 [2024-07-15 03:37:23.605295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.691 [2024-07-15 03:37:23.605327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.691 [2024-07-15 03:37:23.605345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.691 [2024-07-15 03:37:23.605583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.691 [2024-07-15 03:37:23.605826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.691 [2024-07-15 03:37:23.605851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.691 [2024-07-15 03:37:23.605867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.691 [2024-07-15 03:37:23.609462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.691 [2024-07-15 03:37:23.618751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.691 [2024-07-15 03:37:23.619171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.691 [2024-07-15 03:37:23.619203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.691 [2024-07-15 03:37:23.619221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.691 [2024-07-15 03:37:23.619460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.691 [2024-07-15 03:37:23.619703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.691 [2024-07-15 03:37:23.619728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.691 [2024-07-15 03:37:23.619745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.691 [2024-07-15 03:37:23.623335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.691 [2024-07-15 03:37:23.632620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.691 [2024-07-15 03:37:23.633013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.691 [2024-07-15 03:37:23.633045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.691 [2024-07-15 03:37:23.633063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.691 [2024-07-15 03:37:23.633302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.691 [2024-07-15 03:37:23.633545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.691 [2024-07-15 03:37:23.633570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.691 [2024-07-15 03:37:23.633592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.691 [2024-07-15 03:37:23.637184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.691 [2024-07-15 03:37:23.646473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.691 [2024-07-15 03:37:23.646893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.691 [2024-07-15 03:37:23.646926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.691 [2024-07-15 03:37:23.646944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.691 [2024-07-15 03:37:23.647183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.691 [2024-07-15 03:37:23.647426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.691 [2024-07-15 03:37:23.647452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.691 [2024-07-15 03:37:23.647468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.691 [2024-07-15 03:37:23.651052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.691 [2024-07-15 03:37:23.660336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.691 [2024-07-15 03:37:23.660751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.691 [2024-07-15 03:37:23.660783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.691 [2024-07-15 03:37:23.660800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.691 [2024-07-15 03:37:23.661053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.691 [2024-07-15 03:37:23.661296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.661322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.661338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.664920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.692 [2024-07-15 03:37:23.674206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.674599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.674633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.674651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.674906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.675151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.675177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.675193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.678772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.692 [2024-07-15 03:37:23.688075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.688474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.688512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.688531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.688771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.689031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.689057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.689073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.692651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.692 [2024-07-15 03:37:23.701954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.702376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.702408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.702426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.702664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.702920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.702946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.702962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.706539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.692 [2024-07-15 03:37:23.715829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.716250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.716282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.716300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.716539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.716782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.716808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.716824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.720410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.692 [2024-07-15 03:37:23.729693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.730114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.730146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.730164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.730402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.730650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.730676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.730693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.734281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.692 [2024-07-15 03:37:23.743572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.743974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.744007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.744025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.744265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.744508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.744533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.744548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.748130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.692 [2024-07-15 03:37:23.757432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.757855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.757894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.757914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.758162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.758405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.758429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.758446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.762028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.692 [2024-07-15 03:37:23.771318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.771738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.771770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.771788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.772035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.772280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.772304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.772320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.775910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.692 [2024-07-15 03:37:23.785224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.785665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.785715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.785733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.785981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.786225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.786250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.786265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.789844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.692 [2024-07-15 03:37:23.799136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.799546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.799578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.799595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.799834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.800087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.800112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.800127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.803702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.692 [2024-07-15 03:37:23.813003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.813399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.813431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.813448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.813687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.813941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.813967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.813983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.817556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.692 [2024-07-15 03:37:23.826846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.692 [2024-07-15 03:37:23.827267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.692 [2024-07-15 03:37:23.827299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.692 [2024-07-15 03:37:23.827322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.692 [2024-07-15 03:37:23.827562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.692 [2024-07-15 03:37:23.827806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.692 [2024-07-15 03:37:23.827830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.692 [2024-07-15 03:37:23.827846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.692 [2024-07-15 03:37:23.831427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.951 [2024-07-15 03:37:23.840721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.951 [2024-07-15 03:37:23.841099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.951 [2024-07-15 03:37:23.841131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:17.951 [2024-07-15 03:37:23.841149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:17.951 [2024-07-15 03:37:23.841387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:17.951 [2024-07-15 03:37:23.841630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.951 [2024-07-15 03:37:23.841654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.951 [2024-07-15 03:37:23.841670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.951 [2024-07-15 03:37:23.845256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.951 [2024-07-15 03:37:23.854753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.951 [2024-07-15 03:37:23.855154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.951 [2024-07-15 03:37:23.855186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.951 [2024-07-15 03:37:23.855203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.951 [2024-07-15 03:37:23.855441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.951 [2024-07-15 03:37:23.855684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.951 [2024-07-15 03:37:23.855708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.951 [2024-07-15 03:37:23.855724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.951 [2024-07-15 03:37:23.859328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.951 [2024-07-15 03:37:23.868615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.951 [2024-07-15 03:37:23.869013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.951 [2024-07-15 03:37:23.869045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.951 [2024-07-15 03:37:23.869064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.951 [2024-07-15 03:37:23.869302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.951 [2024-07-15 03:37:23.869546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.951 [2024-07-15 03:37:23.869575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.951 [2024-07-15 03:37:23.869592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.951 [2024-07-15 03:37:23.873176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.951 [2024-07-15 03:37:23.882468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.951 [2024-07-15 03:37:23.882888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.951 [2024-07-15 03:37:23.882920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.951 [2024-07-15 03:37:23.882938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.951 [2024-07-15 03:37:23.883177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.951 [2024-07-15 03:37:23.883420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.951 [2024-07-15 03:37:23.883451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.951 [2024-07-15 03:37:23.883477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.951 [2024-07-15 03:37:23.887065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.951 [2024-07-15 03:37:23.896353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.951 [2024-07-15 03:37:23.896779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.951 [2024-07-15 03:37:23.896811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.951 [2024-07-15 03:37:23.896829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.951 [2024-07-15 03:37:23.897076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.951 [2024-07-15 03:37:23.897321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.951 [2024-07-15 03:37:23.897345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.951 [2024-07-15 03:37:23.897361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.951 [2024-07-15 03:37:23.900943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.951 [2024-07-15 03:37:23.910233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.951 [2024-07-15 03:37:23.910647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.951 [2024-07-15 03:37:23.910679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.951 [2024-07-15 03:37:23.910697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.951 [2024-07-15 03:37:23.910945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.951 [2024-07-15 03:37:23.911189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.951 [2024-07-15 03:37:23.911214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.951 [2024-07-15 03:37:23.911230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.951 [2024-07-15 03:37:23.914808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.951 [2024-07-15 03:37:23.924102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.951 [2024-07-15 03:37:23.924521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.951 [2024-07-15 03:37:23.924553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.951 [2024-07-15 03:37:23.924571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.951 [2024-07-15 03:37:23.924810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.951 [2024-07-15 03:37:23.925064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.951 [2024-07-15 03:37:23.925090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.951 [2024-07-15 03:37:23.925105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.951 [2024-07-15 03:37:23.928678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.951 [2024-07-15 03:37:23.937964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.951 [2024-07-15 03:37:23.938378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.951 [2024-07-15 03:37:23.938410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.951 [2024-07-15 03:37:23.938428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.951 [2024-07-15 03:37:23.938667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.951 [2024-07-15 03:37:23.938921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.951 [2024-07-15 03:37:23.938946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.951 [2024-07-15 03:37:23.938961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.951 [2024-07-15 03:37:23.942535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.951 [2024-07-15 03:37:23.951815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.951 [2024-07-15 03:37:23.952241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.951 [2024-07-15 03:37:23.952272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.951 [2024-07-15 03:37:23.952290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.952 [2024-07-15 03:37:23.952529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.952 [2024-07-15 03:37:23.952772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.952 [2024-07-15 03:37:23.952796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.952 [2024-07-15 03:37:23.952812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.952 [2024-07-15 03:37:23.956396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.952 [2024-07-15 03:37:23.965671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.952 [2024-07-15 03:37:23.966089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.952 [2024-07-15 03:37:23.966121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.952 [2024-07-15 03:37:23.966144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.952 [2024-07-15 03:37:23.966384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.952 [2024-07-15 03:37:23.966627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.952 [2024-07-15 03:37:23.966652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.952 [2024-07-15 03:37:23.966667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.952 [2024-07-15 03:37:23.970249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.952 [2024-07-15 03:37:23.979525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.952 [2024-07-15 03:37:23.979939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.952 [2024-07-15 03:37:23.979971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.952 [2024-07-15 03:37:23.979988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.952 [2024-07-15 03:37:23.980227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.952 [2024-07-15 03:37:23.980471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.952 [2024-07-15 03:37:23.980495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.952 [2024-07-15 03:37:23.980511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.952 [2024-07-15 03:37:23.984096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.952 [2024-07-15 03:37:23.993401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.952 [2024-07-15 03:37:23.993806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.952 [2024-07-15 03:37:23.993837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.952 [2024-07-15 03:37:23.993855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.952 [2024-07-15 03:37:23.994102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.952 [2024-07-15 03:37:23.994347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.952 [2024-07-15 03:37:23.994371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.952 [2024-07-15 03:37:23.994387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.952 [2024-07-15 03:37:23.997970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.952 [2024-07-15 03:37:24.007255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.952 [2024-07-15 03:37:24.007667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.952 [2024-07-15 03:37:24.007698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.952 [2024-07-15 03:37:24.007716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.952 [2024-07-15 03:37:24.007965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.952 [2024-07-15 03:37:24.008209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.952 [2024-07-15 03:37:24.008239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.952 [2024-07-15 03:37:24.008257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.952 [2024-07-15 03:37:24.011830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.952 [2024-07-15 03:37:24.021110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.952 [2024-07-15 03:37:24.021520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.952 [2024-07-15 03:37:24.021551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.952 [2024-07-15 03:37:24.021569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.952 [2024-07-15 03:37:24.021808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.952 [2024-07-15 03:37:24.022062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.952 [2024-07-15 03:37:24.022088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.952 [2024-07-15 03:37:24.022104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.952 [2024-07-15 03:37:24.025676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.952 [2024-07-15 03:37:24.034968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.952 [2024-07-15 03:37:24.035381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.952 [2024-07-15 03:37:24.035412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.952 [2024-07-15 03:37:24.035430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.952 [2024-07-15 03:37:24.035668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.952 [2024-07-15 03:37:24.035922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.952 [2024-07-15 03:37:24.035947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.952 [2024-07-15 03:37:24.035963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.952 [2024-07-15 03:37:24.039539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.952 [2024-07-15 03:37:24.048822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.952 [2024-07-15 03:37:24.049325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.952 [2024-07-15 03:37:24.049358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.952 [2024-07-15 03:37:24.049377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.952 [2024-07-15 03:37:24.049615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.952 [2024-07-15 03:37:24.049859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.952 [2024-07-15 03:37:24.049893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.952 [2024-07-15 03:37:24.049910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.952 [2024-07-15 03:37:24.053482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.952 [2024-07-15 03:37:24.062759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.952 [2024-07-15 03:37:24.063162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.952 [2024-07-15 03:37:24.063195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.952 [2024-07-15 03:37:24.063213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.952 [2024-07-15 03:37:24.063452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.952 [2024-07-15 03:37:24.063695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.952 [2024-07-15 03:37:24.063720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.952 [2024-07-15 03:37:24.063735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.952 [2024-07-15 03:37:24.067315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.952 [2024-07-15 03:37:24.076801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.952 [2024-07-15 03:37:24.077234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.952 [2024-07-15 03:37:24.077266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.952 [2024-07-15 03:37:24.077284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.952 [2024-07-15 03:37:24.077522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.952 [2024-07-15 03:37:24.077765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.952 [2024-07-15 03:37:24.077790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.952 [2024-07-15 03:37:24.077806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:17.952 [2024-07-15 03:37:24.081590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.953 [2024-07-15 03:37:24.090663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.953 [2024-07-15 03:37:24.091064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.953 [2024-07-15 03:37:24.091095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:17.953 [2024-07-15 03:37:24.091113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:17.953 [2024-07-15 03:37:24.091352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:17.953 [2024-07-15 03:37:24.091595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.953 [2024-07-15 03:37:24.091620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:17.953 [2024-07-15 03:37:24.091636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.211 [2024-07-15 03:37:24.095221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.211 [2024-07-15 03:37:24.104505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.211 [2024-07-15 03:37:24.104917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.211 [2024-07-15 03:37:24.104950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.211 [2024-07-15 03:37:24.104968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.211 [2024-07-15 03:37:24.105214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.211 [2024-07-15 03:37:24.105457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.211 [2024-07-15 03:37:24.105482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.211 [2024-07-15 03:37:24.105498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.211 [2024-07-15 03:37:24.109082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.211 [2024-07-15 03:37:24.118373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.211 [2024-07-15 03:37:24.118761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.211 [2024-07-15 03:37:24.118792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.211 [2024-07-15 03:37:24.118810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.211 [2024-07-15 03:37:24.119059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.211 [2024-07-15 03:37:24.119302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.211 [2024-07-15 03:37:24.119326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.211 [2024-07-15 03:37:24.119342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.211 [2024-07-15 03:37:24.122922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.211 [2024-07-15 03:37:24.132408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.211 [2024-07-15 03:37:24.132820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.211 [2024-07-15 03:37:24.132852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.211 [2024-07-15 03:37:24.132870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.211 [2024-07-15 03:37:24.133119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.211 [2024-07-15 03:37:24.133363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.211 [2024-07-15 03:37:24.133388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.211 [2024-07-15 03:37:24.133403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.211 [2024-07-15 03:37:24.136984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.211 [2024-07-15 03:37:24.146260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.211 [2024-07-15 03:37:24.146682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.211 [2024-07-15 03:37:24.146714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.211 [2024-07-15 03:37:24.146731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.211 [2024-07-15 03:37:24.146981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.211 [2024-07-15 03:37:24.147225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.211 [2024-07-15 03:37:24.147250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.211 [2024-07-15 03:37:24.147271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.211 [2024-07-15 03:37:24.150844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.211 [2024-07-15 03:37:24.160127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.211 [2024-07-15 03:37:24.160547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.211 [2024-07-15 03:37:24.160578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.211 [2024-07-15 03:37:24.160596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.211 [2024-07-15 03:37:24.160834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.211 [2024-07-15 03:37:24.161089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.211 [2024-07-15 03:37:24.161114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.211 [2024-07-15 03:37:24.161130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.211 [2024-07-15 03:37:24.164703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.211 [2024-07-15 03:37:24.173992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.211 [2024-07-15 03:37:24.174442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.211 [2024-07-15 03:37:24.174491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.211 [2024-07-15 03:37:24.174509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.212 [2024-07-15 03:37:24.174747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.212 [2024-07-15 03:37:24.175002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.212 [2024-07-15 03:37:24.175027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.212 [2024-07-15 03:37:24.175043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.212 [2024-07-15 03:37:24.178639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.212 [2024-07-15 03:37:24.187952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.212 [2024-07-15 03:37:24.188367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.212 [2024-07-15 03:37:24.188399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.212 [2024-07-15 03:37:24.188417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.212 [2024-07-15 03:37:24.188656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.212 [2024-07-15 03:37:24.188911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.212 [2024-07-15 03:37:24.188935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.212 [2024-07-15 03:37:24.188951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.212 [2024-07-15 03:37:24.192526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.212 [2024-07-15 03:37:24.201807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.212 [2024-07-15 03:37:24.202233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.212 [2024-07-15 03:37:24.202270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.212 [2024-07-15 03:37:24.202289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.212 [2024-07-15 03:37:24.202528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.212 [2024-07-15 03:37:24.202771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.212 [2024-07-15 03:37:24.202796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.212 [2024-07-15 03:37:24.202812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.212 [2024-07-15 03:37:24.206397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.212 [2024-07-15 03:37:24.215675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.212 [2024-07-15 03:37:24.216104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.212 [2024-07-15 03:37:24.216136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.212 [2024-07-15 03:37:24.216154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.212 [2024-07-15 03:37:24.216393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.212 [2024-07-15 03:37:24.216636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.212 [2024-07-15 03:37:24.216660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.212 [2024-07-15 03:37:24.216676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.212 [2024-07-15 03:37:24.220261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.212 [2024-07-15 03:37:24.229544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.212 [2024-07-15 03:37:24.229963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.212 [2024-07-15 03:37:24.229996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.212 [2024-07-15 03:37:24.230015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.212 [2024-07-15 03:37:24.230254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.212 [2024-07-15 03:37:24.230497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.212 [2024-07-15 03:37:24.230521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.212 [2024-07-15 03:37:24.230537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.212 [2024-07-15 03:37:24.234124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.212 [2024-07-15 03:37:24.243426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.212 [2024-07-15 03:37:24.243837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.212 [2024-07-15 03:37:24.243869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.212 [2024-07-15 03:37:24.243895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.212 [2024-07-15 03:37:24.244136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.212 [2024-07-15 03:37:24.244385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.212 [2024-07-15 03:37:24.244410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.212 [2024-07-15 03:37:24.244426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.212 [2024-07-15 03:37:24.248009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.212 [2024-07-15 03:37:24.257291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.212 [2024-07-15 03:37:24.257705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.212 [2024-07-15 03:37:24.257737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.212 [2024-07-15 03:37:24.257755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.212 [2024-07-15 03:37:24.258006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.212 [2024-07-15 03:37:24.258251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.212 [2024-07-15 03:37:24.258275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.212 [2024-07-15 03:37:24.258291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.212 [2024-07-15 03:37:24.261865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.212 [2024-07-15 03:37:24.271147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.212 [2024-07-15 03:37:24.271561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.212 [2024-07-15 03:37:24.271592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.212 [2024-07-15 03:37:24.271610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.212 [2024-07-15 03:37:24.271849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.212 [2024-07-15 03:37:24.272100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.212 [2024-07-15 03:37:24.272125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.212 [2024-07-15 03:37:24.272141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.212 [2024-07-15 03:37:24.275717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.212 [2024-07-15 03:37:24.285013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.212 [2024-07-15 03:37:24.285434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.212 [2024-07-15 03:37:24.285465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.212 [2024-07-15 03:37:24.285483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.212 [2024-07-15 03:37:24.285721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.212 [2024-07-15 03:37:24.285976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.212 [2024-07-15 03:37:24.286002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.212 [2024-07-15 03:37:24.286018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.212 [2024-07-15 03:37:24.289610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.212 [2024-07-15 03:37:24.298896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.212 [2024-07-15 03:37:24.299319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.212 [2024-07-15 03:37:24.299351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.212 [2024-07-15 03:37:24.299369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.212 [2024-07-15 03:37:24.299607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.212 [2024-07-15 03:37:24.299851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.212 [2024-07-15 03:37:24.299875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.212 [2024-07-15 03:37:24.299900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.212 [2024-07-15 03:37:24.303472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.213 [2024-07-15 03:37:24.312771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.213 [2024-07-15 03:37:24.313179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.213 [2024-07-15 03:37:24.313211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.213 [2024-07-15 03:37:24.313229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.213 [2024-07-15 03:37:24.313468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.213 [2024-07-15 03:37:24.313711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.213 [2024-07-15 03:37:24.313735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.213 [2024-07-15 03:37:24.313751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.213 [2024-07-15 03:37:24.317332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.213 [2024-07-15 03:37:24.326616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.213 [2024-07-15 03:37:24.327012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.213 [2024-07-15 03:37:24.327044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.213 [2024-07-15 03:37:24.327062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.213 [2024-07-15 03:37:24.327302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.213 [2024-07-15 03:37:24.327545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.213 [2024-07-15 03:37:24.327570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.213 [2024-07-15 03:37:24.327586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.213 [2024-07-15 03:37:24.331164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.213 [2024-07-15 03:37:24.340657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.213 [2024-07-15 03:37:24.341035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.213 [2024-07-15 03:37:24.341067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.213 [2024-07-15 03:37:24.341092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.213 [2024-07-15 03:37:24.341333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.213 [2024-07-15 03:37:24.341576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.213 [2024-07-15 03:37:24.341601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.213 [2024-07-15 03:37:24.341617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.213 [2024-07-15 03:37:24.345213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.471 [2024-07-15 03:37:24.354514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.471 [2024-07-15 03:37:24.354905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.471 [2024-07-15 03:37:24.354938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.471 [2024-07-15 03:37:24.354956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.471 [2024-07-15 03:37:24.355196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.471 [2024-07-15 03:37:24.355440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.471 [2024-07-15 03:37:24.355464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.472 [2024-07-15 03:37:24.355480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.472 [2024-07-15 03:37:24.359062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.472 [2024-07-15 03:37:24.368353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.472 [2024-07-15 03:37:24.368776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.472 [2024-07-15 03:37:24.368808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.472 [2024-07-15 03:37:24.368826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.472 [2024-07-15 03:37:24.369074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.472 [2024-07-15 03:37:24.369318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.472 [2024-07-15 03:37:24.369343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.472 [2024-07-15 03:37:24.369358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.472 [2024-07-15 03:37:24.372939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.472 [2024-07-15 03:37:24.382228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.472 [2024-07-15 03:37:24.382721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.472 [2024-07-15 03:37:24.382753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.472 [2024-07-15 03:37:24.382771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.472 [2024-07-15 03:37:24.383020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.472 [2024-07-15 03:37:24.383265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.472 [2024-07-15 03:37:24.383295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.472 [2024-07-15 03:37:24.383312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.472 [2024-07-15 03:37:24.386891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.472 [2024-07-15 03:37:24.396174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.472 [2024-07-15 03:37:24.396610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.472 [2024-07-15 03:37:24.396642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.472 [2024-07-15 03:37:24.396660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.472 [2024-07-15 03:37:24.396907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.472 [2024-07-15 03:37:24.397151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.472 [2024-07-15 03:37:24.397175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.472 [2024-07-15 03:37:24.397192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.472 [2024-07-15 03:37:24.400773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.472 [2024-07-15 03:37:24.410063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.472 [2024-07-15 03:37:24.410568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.472 [2024-07-15 03:37:24.410599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.472 [2024-07-15 03:37:24.410617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.472 [2024-07-15 03:37:24.410857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.472 [2024-07-15 03:37:24.411110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.472 [2024-07-15 03:37:24.411135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.472 [2024-07-15 03:37:24.411151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.472 [2024-07-15 03:37:24.414726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.472 [2024-07-15 03:37:24.424022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.472 [2024-07-15 03:37:24.424435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.472 [2024-07-15 03:37:24.424468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.472 [2024-07-15 03:37:24.424485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.472 [2024-07-15 03:37:24.424726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.472 [2024-07-15 03:37:24.424979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.472 [2024-07-15 03:37:24.425004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.472 [2024-07-15 03:37:24.425020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.472 [2024-07-15 03:37:24.428592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.472 [2024-07-15 03:37:24.437895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.472 [2024-07-15 03:37:24.438286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.472 [2024-07-15 03:37:24.438319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.472 [2024-07-15 03:37:24.438337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.472 [2024-07-15 03:37:24.438576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.472 [2024-07-15 03:37:24.438821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.472 [2024-07-15 03:37:24.438844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.472 [2024-07-15 03:37:24.438860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.472 [2024-07-15 03:37:24.442445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.472 [2024-07-15 03:37:24.451728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:18.472 [2024-07-15 03:37:24.452160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.472 [2024-07-15 03:37:24.452192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420
00:34:18.472 [2024-07-15 03:37:24.452210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set
00:34:18.472 [2024-07-15 03:37:24.452449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor
00:34:18.473 [2024-07-15 03:37:24.452692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:18.473 [2024-07-15 03:37:24.452717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:18.473 [2024-07-15 03:37:24.452732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:18.473 [2024-07-15 03:37:24.456318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:18.473 [2024-07-15 03:37:24.465599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.473 [2024-07-15 03:37:24.465995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.473 [2024-07-15 03:37:24.466028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.473 [2024-07-15 03:37:24.466045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.473 [2024-07-15 03:37:24.466285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.473 [2024-07-15 03:37:24.466528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.473 [2024-07-15 03:37:24.466552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.473 [2024-07-15 03:37:24.466568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.473 [2024-07-15 03:37:24.470153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.473 [2024-07-15 03:37:24.479437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.473 [2024-07-15 03:37:24.479870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.473 [2024-07-15 03:37:24.479909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.473 [2024-07-15 03:37:24.479927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.473 [2024-07-15 03:37:24.480172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.473 [2024-07-15 03:37:24.480416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.473 [2024-07-15 03:37:24.480440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.473 [2024-07-15 03:37:24.480456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.473 [2024-07-15 03:37:24.484060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.473 [2024-07-15 03:37:24.493341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.473 [2024-07-15 03:37:24.493733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.473 [2024-07-15 03:37:24.493765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.473 [2024-07-15 03:37:24.493783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.473 [2024-07-15 03:37:24.494035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.473 [2024-07-15 03:37:24.494280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.473 [2024-07-15 03:37:24.494304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.473 [2024-07-15 03:37:24.494321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.473 [2024-07-15 03:37:24.497902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.473 [2024-07-15 03:37:24.507186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.473 [2024-07-15 03:37:24.507597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.473 [2024-07-15 03:37:24.507628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.473 [2024-07-15 03:37:24.507646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.473 [2024-07-15 03:37:24.507895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.473 [2024-07-15 03:37:24.508138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.473 [2024-07-15 03:37:24.508163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.473 [2024-07-15 03:37:24.508179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.473 [2024-07-15 03:37:24.511750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.473 [2024-07-15 03:37:24.521030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.473 [2024-07-15 03:37:24.521456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.473 [2024-07-15 03:37:24.521487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.473 [2024-07-15 03:37:24.521505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.473 [2024-07-15 03:37:24.521744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.473 [2024-07-15 03:37:24.521997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.473 [2024-07-15 03:37:24.522023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.473 [2024-07-15 03:37:24.522044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.473 [2024-07-15 03:37:24.525618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.473 [2024-07-15 03:37:24.534902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.473 [2024-07-15 03:37:24.535316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.473 [2024-07-15 03:37:24.535348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.473 [2024-07-15 03:37:24.535366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.473 [2024-07-15 03:37:24.535604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.473 [2024-07-15 03:37:24.535848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.473 [2024-07-15 03:37:24.535872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.473 [2024-07-15 03:37:24.535898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.473 [2024-07-15 03:37:24.539473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.473 [2024-07-15 03:37:24.548759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.473 [2024-07-15 03:37:24.549178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.473 [2024-07-15 03:37:24.549211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.473 [2024-07-15 03:37:24.549229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.473 [2024-07-15 03:37:24.549468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.473 [2024-07-15 03:37:24.549712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.473 [2024-07-15 03:37:24.549736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.473 [2024-07-15 03:37:24.549752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.473 [2024-07-15 03:37:24.553334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.473 [2024-07-15 03:37:24.562616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.473 [2024-07-15 03:37:24.563017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.473 [2024-07-15 03:37:24.563050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.474 [2024-07-15 03:37:24.563067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.474 [2024-07-15 03:37:24.563307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.474 [2024-07-15 03:37:24.563551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.474 [2024-07-15 03:37:24.563575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.474 [2024-07-15 03:37:24.563590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.474 [2024-07-15 03:37:24.567172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.474 [2024-07-15 03:37:24.576458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.474 [2024-07-15 03:37:24.576891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.474 [2024-07-15 03:37:24.576923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.474 [2024-07-15 03:37:24.576941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.474 [2024-07-15 03:37:24.577180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.474 [2024-07-15 03:37:24.577424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.474 [2024-07-15 03:37:24.577448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.474 [2024-07-15 03:37:24.577464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.474 [2024-07-15 03:37:24.581044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.474 [2024-07-15 03:37:24.590327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.474 [2024-07-15 03:37:24.590713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.474 [2024-07-15 03:37:24.590745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.474 [2024-07-15 03:37:24.590763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.474 [2024-07-15 03:37:24.591012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.474 [2024-07-15 03:37:24.591256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.474 [2024-07-15 03:37:24.591281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.474 [2024-07-15 03:37:24.591297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.474 [2024-07-15 03:37:24.594873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.474 [2024-07-15 03:37:24.604367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.474 [2024-07-15 03:37:24.604780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.474 [2024-07-15 03:37:24.604812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.474 [2024-07-15 03:37:24.604830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.474 [2024-07-15 03:37:24.605079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.474 [2024-07-15 03:37:24.605323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.474 [2024-07-15 03:37:24.605347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.474 [2024-07-15 03:37:24.605363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.474 [2024-07-15 03:37:24.608944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.734 [2024-07-15 03:37:24.618222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.734 [2024-07-15 03:37:24.618691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.734 [2024-07-15 03:37:24.618722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.734 [2024-07-15 03:37:24.618740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.734 [2024-07-15 03:37:24.618996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.734 [2024-07-15 03:37:24.619240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.734 [2024-07-15 03:37:24.619265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.734 [2024-07-15 03:37:24.619281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.734 [2024-07-15 03:37:24.622852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.734 [2024-07-15 03:37:24.632130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.734 [2024-07-15 03:37:24.632518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.734 [2024-07-15 03:37:24.632550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.734 [2024-07-15 03:37:24.632568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.734 [2024-07-15 03:37:24.632807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.734 [2024-07-15 03:37:24.633061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.734 [2024-07-15 03:37:24.633086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.734 [2024-07-15 03:37:24.633102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.734 [2024-07-15 03:37:24.636676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.734 [2024-07-15 03:37:24.646169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.734 [2024-07-15 03:37:24.646583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.734 [2024-07-15 03:37:24.646614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.734 [2024-07-15 03:37:24.646632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.734 [2024-07-15 03:37:24.646870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.734 [2024-07-15 03:37:24.647124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.734 [2024-07-15 03:37:24.647149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.734 [2024-07-15 03:37:24.647165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.734 [2024-07-15 03:37:24.650739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.734 [2024-07-15 03:37:24.660022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.734 [2024-07-15 03:37:24.660433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.734 [2024-07-15 03:37:24.660464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.734 [2024-07-15 03:37:24.660482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.734 [2024-07-15 03:37:24.660721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.734 [2024-07-15 03:37:24.660974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.734 [2024-07-15 03:37:24.660999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.734 [2024-07-15 03:37:24.661021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.734 [2024-07-15 03:37:24.664593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.734 [2024-07-15 03:37:24.673883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.734 [2024-07-15 03:37:24.674294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.734 [2024-07-15 03:37:24.674326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.734 [2024-07-15 03:37:24.674344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.734 [2024-07-15 03:37:24.674582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.734 [2024-07-15 03:37:24.674825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.734 [2024-07-15 03:37:24.674849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.734 [2024-07-15 03:37:24.674865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.734 [2024-07-15 03:37:24.678446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.734 [2024-07-15 03:37:24.687736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.734 [2024-07-15 03:37:24.688172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.734 [2024-07-15 03:37:24.688204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.734 [2024-07-15 03:37:24.688222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.734 [2024-07-15 03:37:24.688460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.734 [2024-07-15 03:37:24.688704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.734 [2024-07-15 03:37:24.688728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.734 [2024-07-15 03:37:24.688744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.734 [2024-07-15 03:37:24.692324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.734 [2024-07-15 03:37:24.701597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.734 [2024-07-15 03:37:24.701989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.734 [2024-07-15 03:37:24.702020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.734 [2024-07-15 03:37:24.702038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.734 [2024-07-15 03:37:24.702276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.734 [2024-07-15 03:37:24.702519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.734 [2024-07-15 03:37:24.702544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.734 [2024-07-15 03:37:24.702560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.734 [2024-07-15 03:37:24.706141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.734 [2024-07-15 03:37:24.715626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.734 [2024-07-15 03:37:24.716043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.734 [2024-07-15 03:37:24.716081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.734 [2024-07-15 03:37:24.716100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.734 [2024-07-15 03:37:24.716339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.734 [2024-07-15 03:37:24.716583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.734 [2024-07-15 03:37:24.716607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.734 [2024-07-15 03:37:24.716622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.734 [2024-07-15 03:37:24.720203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.734 [2024-07-15 03:37:24.729488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.734 [2024-07-15 03:37:24.729913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.734 [2024-07-15 03:37:24.729945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.734 [2024-07-15 03:37:24.729963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.734 [2024-07-15 03:37:24.730202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.735 [2024-07-15 03:37:24.730445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.735 [2024-07-15 03:37:24.730470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.735 [2024-07-15 03:37:24.730485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.735 [2024-07-15 03:37:24.734069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.735 [2024-07-15 03:37:24.743360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.735 [2024-07-15 03:37:24.743776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.735 [2024-07-15 03:37:24.743808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.735 [2024-07-15 03:37:24.743826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.735 [2024-07-15 03:37:24.744075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.735 [2024-07-15 03:37:24.744319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.735 [2024-07-15 03:37:24.744344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.735 [2024-07-15 03:37:24.744360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.735 [2024-07-15 03:37:24.747937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.735 [2024-07-15 03:37:24.757216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.735 [2024-07-15 03:37:24.757639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.735 [2024-07-15 03:37:24.757670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.735 [2024-07-15 03:37:24.757688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.735 [2024-07-15 03:37:24.757937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.735 [2024-07-15 03:37:24.758188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.735 [2024-07-15 03:37:24.758214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.735 [2024-07-15 03:37:24.758230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.735 [2024-07-15 03:37:24.761802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.735 [2024-07-15 03:37:24.771085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.735 [2024-07-15 03:37:24.771493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.735 [2024-07-15 03:37:24.771525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.735 [2024-07-15 03:37:24.771542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.735 [2024-07-15 03:37:24.771780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.735 [2024-07-15 03:37:24.772033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.735 [2024-07-15 03:37:24.772058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.735 [2024-07-15 03:37:24.772075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.735 [2024-07-15 03:37:24.775648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.735 [2024-07-15 03:37:24.784948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.735 [2024-07-15 03:37:24.785374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.735 [2024-07-15 03:37:24.785406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.735 [2024-07-15 03:37:24.785424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.735 [2024-07-15 03:37:24.785663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.735 [2024-07-15 03:37:24.785916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.735 [2024-07-15 03:37:24.785941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.735 [2024-07-15 03:37:24.785956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.735 [2024-07-15 03:37:24.789530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.735 [2024-07-15 03:37:24.798808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.735 [2024-07-15 03:37:24.799230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.735 [2024-07-15 03:37:24.799262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.735 [2024-07-15 03:37:24.799279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.735 [2024-07-15 03:37:24.799518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.735 [2024-07-15 03:37:24.799761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.735 [2024-07-15 03:37:24.799786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.735 [2024-07-15 03:37:24.799801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.735 [2024-07-15 03:37:24.803388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.735 [2024-07-15 03:37:24.812668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.735 [2024-07-15 03:37:24.813093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.735 [2024-07-15 03:37:24.813125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.735 [2024-07-15 03:37:24.813143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.735 [2024-07-15 03:37:24.813381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.735 [2024-07-15 03:37:24.813624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.735 [2024-07-15 03:37:24.813648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.735 [2024-07-15 03:37:24.813664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.735 [2024-07-15 03:37:24.817242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.735 [2024-07-15 03:37:24.826522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.735 [2024-07-15 03:37:24.826938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.735 [2024-07-15 03:37:24.826971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.735 [2024-07-15 03:37:24.826989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.735 [2024-07-15 03:37:24.827229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.735 [2024-07-15 03:37:24.827472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.735 [2024-07-15 03:37:24.827496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.735 [2024-07-15 03:37:24.827512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.735 [2024-07-15 03:37:24.831095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.735 [2024-07-15 03:37:24.840378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.735 [2024-07-15 03:37:24.840783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.735 [2024-07-15 03:37:24.840815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.735 [2024-07-15 03:37:24.840834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.735 [2024-07-15 03:37:24.841083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.735 [2024-07-15 03:37:24.841328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.735 [2024-07-15 03:37:24.841352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.735 [2024-07-15 03:37:24.841368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.735 [2024-07-15 03:37:24.844945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.735 [2024-07-15 03:37:24.854224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.735 [2024-07-15 03:37:24.854610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.735 [2024-07-15 03:37:24.854642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.735 [2024-07-15 03:37:24.854666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.735 [2024-07-15 03:37:24.854917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.735 [2024-07-15 03:37:24.855161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.735 [2024-07-15 03:37:24.855186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.735 [2024-07-15 03:37:24.855201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.735 [2024-07-15 03:37:24.858771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.735 [2024-07-15 03:37:24.868262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.735 [2024-07-15 03:37:24.868684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.735 [2024-07-15 03:37:24.868716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.735 [2024-07-15 03:37:24.868733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.736 [2024-07-15 03:37:24.868982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.736 [2024-07-15 03:37:24.869226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.736 [2024-07-15 03:37:24.869251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.736 [2024-07-15 03:37:24.869267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.736 [2024-07-15 03:37:24.872841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.995 [2024-07-15 03:37:24.882131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.995 [2024-07-15 03:37:24.882557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.995 [2024-07-15 03:37:24.882589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.995 [2024-07-15 03:37:24.882606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.995 [2024-07-15 03:37:24.882845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.995 [2024-07-15 03:37:24.883097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.995 [2024-07-15 03:37:24.883123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.995 [2024-07-15 03:37:24.883139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.995 [2024-07-15 03:37:24.886710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.995 [2024-07-15 03:37:24.896000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.995 [2024-07-15 03:37:24.896425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.995 [2024-07-15 03:37:24.896456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.995 [2024-07-15 03:37:24.896474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.995 [2024-07-15 03:37:24.896712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.995 [2024-07-15 03:37:24.896966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.995 [2024-07-15 03:37:24.896996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.996 [2024-07-15 03:37:24.897013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.996 [2024-07-15 03:37:24.900584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.996 [2024-07-15 03:37:24.909903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.996 [2024-07-15 03:37:24.910324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-15 03:37:24.910356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.996 [2024-07-15 03:37:24.910374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.996 [2024-07-15 03:37:24.910613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.996 [2024-07-15 03:37:24.910856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.996 [2024-07-15 03:37:24.910889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.996 [2024-07-15 03:37:24.910907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.996 [2024-07-15 03:37:24.914481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.996 [2024-07-15 03:37:24.923759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.996 [2024-07-15 03:37:24.924133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-15 03:37:24.924165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.996 [2024-07-15 03:37:24.924183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.996 [2024-07-15 03:37:24.924421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.996 [2024-07-15 03:37:24.924664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.996 [2024-07-15 03:37:24.924689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.996 [2024-07-15 03:37:24.924704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.996 [2024-07-15 03:37:24.928286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.996 [2024-07-15 03:37:24.937785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.996 [2024-07-15 03:37:24.938193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-15 03:37:24.938225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.996 [2024-07-15 03:37:24.938242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.996 [2024-07-15 03:37:24.938481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.996 [2024-07-15 03:37:24.938724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.996 [2024-07-15 03:37:24.938749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.996 [2024-07-15 03:37:24.938764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.996 [2024-07-15 03:37:24.942349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.996 [2024-07-15 03:37:24.951643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.996 [2024-07-15 03:37:24.952018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-15 03:37:24.952050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.996 [2024-07-15 03:37:24.952068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.996 [2024-07-15 03:37:24.952306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.996 [2024-07-15 03:37:24.952549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.996 [2024-07-15 03:37:24.952573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.996 [2024-07-15 03:37:24.952589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.996 [2024-07-15 03:37:24.956173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.996 [2024-07-15 03:37:24.965663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.996 [2024-07-15 03:37:24.966042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-15 03:37:24.966074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.996 [2024-07-15 03:37:24.966092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.996 [2024-07-15 03:37:24.966331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.996 [2024-07-15 03:37:24.966574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.996 [2024-07-15 03:37:24.966598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.996 [2024-07-15 03:37:24.966614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.996 [2024-07-15 03:37:24.970197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.996 [2024-07-15 03:37:24.979697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.996 [2024-07-15 03:37:24.980100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-15 03:37:24.980133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.996 [2024-07-15 03:37:24.980150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.996 [2024-07-15 03:37:24.980389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.996 [2024-07-15 03:37:24.980632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.996 [2024-07-15 03:37:24.980656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.996 [2024-07-15 03:37:24.980672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.996 [2024-07-15 03:37:24.984260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.996 [2024-07-15 03:37:24.993549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.996 [2024-07-15 03:37:24.993955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-15 03:37:24.993988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.996 [2024-07-15 03:37:24.994006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.996 [2024-07-15 03:37:24.994250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.996 [2024-07-15 03:37:24.994494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.996 [2024-07-15 03:37:24.994519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.996 [2024-07-15 03:37:24.994534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.996 [2024-07-15 03:37:24.998113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.996 [2024-07-15 03:37:25.007404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.996 [2024-07-15 03:37:25.007814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-15 03:37:25.007845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.996 [2024-07-15 03:37:25.007863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.996 [2024-07-15 03:37:25.008109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.996 [2024-07-15 03:37:25.008353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.996 [2024-07-15 03:37:25.008378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.996 [2024-07-15 03:37:25.008393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.996 [2024-07-15 03:37:25.011976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.996 [2024-07-15 03:37:25.021264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.996 [2024-07-15 03:37:25.021747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.996 [2024-07-15 03:37:25.021800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.996 [2024-07-15 03:37:25.021818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.996 [2024-07-15 03:37:25.022064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.996 [2024-07-15 03:37:25.022309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.996 [2024-07-15 03:37:25.022333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.996 [2024-07-15 03:37:25.022349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.996 [2024-07-15 03:37:25.025930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3346288 Killed "${NVMF_APP[@]}" "$@" 00:34:18.996 03:37:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:18.996 03:37:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:18.996 03:37:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:18.996 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:18.996 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:18.996 [2024-07-15 03:37:25.035214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.996 [2024-07-15 03:37:25.035624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-15 03:37:25.035673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.997 [2024-07-15 03:37:25.035696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.997 [2024-07-15 03:37:25.035947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.997 03:37:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3347357 00:34:18.997 03:37:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:18.997 03:37:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3347357 00:34:18.997 [2024-07-15 03:37:25.036191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.997 [2024-07-15 03:37:25.036215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.997 [2024-07-15 03:37:25.036231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:18.997 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3347357 ']' 00:34:18.997 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.997 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:18.997 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.997 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:18.997 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:18.997 [2024-07-15 03:37:25.039805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.997 [2024-07-15 03:37:25.049106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.997 [2024-07-15 03:37:25.049519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-15 03:37:25.049551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.997 [2024-07-15 03:37:25.049569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.997 [2024-07-15 03:37:25.049807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.997 [2024-07-15 03:37:25.050060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.997 [2024-07-15 03:37:25.050084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.997 [2024-07-15 03:37:25.050100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.997 [2024-07-15 03:37:25.053671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.997 [2024-07-15 03:37:25.062958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.997 [2024-07-15 03:37:25.063346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-15 03:37:25.063389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.997 [2024-07-15 03:37:25.063407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.997 [2024-07-15 03:37:25.063645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.997 [2024-07-15 03:37:25.063898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.997 [2024-07-15 03:37:25.063923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.997 [2024-07-15 03:37:25.063944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:18.997 [2024-07-15 03:37:25.067519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.997 [2024-07-15 03:37:25.076886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.997 [2024-07-15 03:37:25.077335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-15 03:37:25.077368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.997 [2024-07-15 03:37:25.077386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.997 [2024-07-15 03:37:25.077626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.997 [2024-07-15 03:37:25.077870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.997 [2024-07-15 03:37:25.077904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.997 [2024-07-15 03:37:25.077921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.997 [2024-07-15 03:37:25.081496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.997 [2024-07-15 03:37:25.085502] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:18.997 [2024-07-15 03:37:25.085571] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.997 [2024-07-15 03:37:25.090812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.997 [2024-07-15 03:37:25.091245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-15 03:37:25.091277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.997 [2024-07-15 03:37:25.091296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.997 [2024-07-15 03:37:25.091534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.997 [2024-07-15 03:37:25.091778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.997 [2024-07-15 03:37:25.091802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.997 [2024-07-15 03:37:25.091819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.997 [2024-07-15 03:37:25.095400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
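
The "Killed ${NVMF_APP[@]}" line earlier shows bdevperf.sh deliberately killing the running target, after which tgt_init launches a fresh one; the relaunch command and the resulting DPDK EAL parameter line are both traced above. A minimal sketch of that relaunch, reusing the binary path, netns and flags exactly as they appear in this log (-i 0 sets the shared-memory instance id, -e 0xFFFF the tracepoint group mask, -m 0xE the reactor core mask):

  sudo ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!    # this run's relaunched target got pid 3347357
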
00:34:18.997 [2024-07-15 03:37:25.104858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.997 [2024-07-15 03:37:25.105289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-15 03:37:25.105321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.997 [2024-07-15 03:37:25.105345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.997 [2024-07-15 03:37:25.105584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.997 [2024-07-15 03:37:25.105827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.997 [2024-07-15 03:37:25.105851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.997 [2024-07-15 03:37:25.105867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.997 [2024-07-15 03:37:25.109458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.997 [2024-07-15 03:37:25.118750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.997 [2024-07-15 03:37:25.119175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-15 03:37:25.119207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.997 [2024-07-15 03:37:25.119227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.997 [2024-07-15 03:37:25.119465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.997 [2024-07-15 03:37:25.119709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.997 [2024-07-15 03:37:25.119733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.997 [2024-07-15 03:37:25.119749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.997 [2024-07-15 03:37:25.123333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
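
After the relaunch, the waitforlisten step traced above blocks until the new target answers on /var/tmp/spdk.sock. A rough bash equivalent of that wait, sketched here rather than the autotest implementation itself (assumes $nvmfpid from the sketch above and rpc.py from this workspace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "target died during startup" >&2; break; }
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break   # RPC socket is up
      sleep 0.1
  done
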
00:34:18.997 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.997 [2024-07-15 03:37:25.132630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.997 [2024-07-15 03:37:25.133038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.997 [2024-07-15 03:37:25.133069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:18.997 [2024-07-15 03:37:25.133087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:18.997 [2024-07-15 03:37:25.133328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:18.997 [2024-07-15 03:37:25.133572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.997 [2024-07-15 03:37:25.133596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.997 [2024-07-15 03:37:25.133612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.997 [2024-07-15 03:37:25.137195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.257 [2024-07-15 03:37:25.146507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.257 [2024-07-15 03:37:25.146949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.257 [2024-07-15 03:37:25.146981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.257 [2024-07-15 03:37:25.147002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.257 [2024-07-15 03:37:25.147241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.257 [2024-07-15 03:37:25.147484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.257 [2024-07-15 03:37:25.147508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.257 [2024-07-15 03:37:25.147524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.257 [2024-07-15 03:37:25.151106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
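
The EAL notice above only means NUMA node 1 had no free 2 MiB hugepages reserved; initialization continues from the other node's pool. If node 1 did need pages, reserving them would look roughly like this (a sketch; the count 1024 is arbitrary):

  echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
  grep -i hugepages /proc/meminfo   # confirm HugePages_Total / HugePages_Free
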
00:34:19.257 [2024-07-15 03:37:25.160388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.257 [2024-07-15 03:37:25.160803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.257 [2024-07-15 03:37:25.160840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.257 [2024-07-15 03:37:25.160870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.257 [2024-07-15 03:37:25.161120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.257 [2024-07-15 03:37:25.161364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.257 [2024-07-15 03:37:25.161388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.257 [2024-07-15 03:37:25.161403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.257 [2024-07-15 03:37:25.162067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:19.257 [2024-07-15 03:37:25.164982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.257 [2024-07-15 03:37:25.174300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.257 [2024-07-15 03:37:25.174893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.257 [2024-07-15 03:37:25.174949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.257 [2024-07-15 03:37:25.174972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.257 [2024-07-15 03:37:25.175221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.257 [2024-07-15 03:37:25.175470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.257 [2024-07-15 03:37:25.175495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.257 [2024-07-15 03:37:25.175515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.257 [2024-07-15 03:37:25.179100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
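
"Total cores available: 3" above, and the reactors later starting on cores 1, 2 and 3, both follow from the -m 0xE core mask: the mask is a bitmap of CPU ids, and 0xE = binary 1110 leaves bit 0 clear. A one-liner to check the arithmetic (assumes bc is installed):

  echo 'obase=2; 14' | bc    # 0xE = 14 = 1110 -> bits 1, 2, 3 set -> cores 1, 2, 3
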
00:34:19.257 [2024-07-15 03:37:25.188397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.257 [2024-07-15 03:37:25.188863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.257 [2024-07-15 03:37:25.188911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.257 [2024-07-15 03:37:25.188930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.257 [2024-07-15 03:37:25.189170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.257 [2024-07-15 03:37:25.189415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.257 [2024-07-15 03:37:25.189441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.257 [2024-07-15 03:37:25.189457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.257 [2024-07-15 03:37:25.193033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.257 [2024-07-15 03:37:25.202313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.257 [2024-07-15 03:37:25.202728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.257 [2024-07-15 03:37:25.202760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.257 [2024-07-15 03:37:25.202785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.257 [2024-07-15 03:37:25.203036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.257 [2024-07-15 03:37:25.203294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.257 [2024-07-15 03:37:25.203320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.257 [2024-07-15 03:37:25.203336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.257 [2024-07-15 03:37:25.206916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.257 [2024-07-15 03:37:25.216213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.257 [2024-07-15 03:37:25.216763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.257 [2024-07-15 03:37:25.216816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.257 [2024-07-15 03:37:25.216838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.258 [2024-07-15 03:37:25.217096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.258 [2024-07-15 03:37:25.217344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.258 [2024-07-15 03:37:25.217370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.258 [2024-07-15 03:37:25.217389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.258 [2024-07-15 03:37:25.220976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.258 [2024-07-15 03:37:25.230279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.258 [2024-07-15 03:37:25.230790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-15 03:37:25.230839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.258 [2024-07-15 03:37:25.230860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.258 [2024-07-15 03:37:25.231112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.258 [2024-07-15 03:37:25.231359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.258 [2024-07-15 03:37:25.231385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.258 [2024-07-15 03:37:25.231403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.258 [2024-07-15 03:37:25.234985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.258 [2024-07-15 03:37:25.244266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.258 [2024-07-15 03:37:25.244673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-15 03:37:25.244707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.258 [2024-07-15 03:37:25.244725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.258 [2024-07-15 03:37:25.244975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.258 [2024-07-15 03:37:25.245220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.258 [2024-07-15 03:37:25.245245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.258 [2024-07-15 03:37:25.245262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.258 [2024-07-15 03:37:25.248845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.258 [2024-07-15 03:37:25.255709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.258 [2024-07-15 03:37:25.255745] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.258 [2024-07-15 03:37:25.255770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.258 [2024-07-15 03:37:25.255783] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.258 [2024-07-15 03:37:25.255795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.258 [2024-07-15 03:37:25.255874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.258 [2024-07-15 03:37:25.255927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:19.258 [2024-07-15 03:37:25.255931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.258 [2024-07-15 03:37:25.258145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.258 [2024-07-15 03:37:25.258607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-15 03:37:25.258639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.258 [2024-07-15 03:37:25.258667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.258 [2024-07-15 03:37:25.258918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.258 [2024-07-15 03:37:25.259162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.258 [2024-07-15 03:37:25.259187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.258 [2024-07-15 03:37:25.259204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:19.258 [2024-07-15 03:37:25.262795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.258 [2024-07-15 03:37:25.272150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.258 [2024-07-15 03:37:25.272765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-15 03:37:25.272814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.258 [2024-07-15 03:37:25.272839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.258 [2024-07-15 03:37:25.273098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.258 [2024-07-15 03:37:25.273349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.258 [2024-07-15 03:37:25.273376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.258 [2024-07-15 03:37:25.273396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.258 [2024-07-15 03:37:25.276988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.258 [2024-07-15 03:37:25.286107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.258 [2024-07-15 03:37:25.286730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-15 03:37:25.286778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.258 [2024-07-15 03:37:25.286802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.258 [2024-07-15 03:37:25.287064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.258 [2024-07-15 03:37:25.287328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.258 [2024-07-15 03:37:25.287356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.258 [2024-07-15 03:37:25.287377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.258 [2024-07-15 03:37:25.290966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
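
The app_setup_trace notices above give the two ways to pull the tracepoints enabled by -e 0xFFFF; both commands are quoted straight from the log (take the snapshot while the target is still alive, or keep the shm file afterwards):

  spdk_trace -s nvmf -i 0            # runtime snapshot of events; -i matches the target's -i 0
  cp /dev/shm/nvmf_trace.0 /tmp/     # copy the trace file for offline analysis/debug
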
00:34:19.258 [2024-07-15 03:37:25.300068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.258 [2024-07-15 03:37:25.300663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-15 03:37:25.300712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.258 [2024-07-15 03:37:25.300736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.258 [2024-07-15 03:37:25.300996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.258 [2024-07-15 03:37:25.301246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.258 [2024-07-15 03:37:25.301273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.258 [2024-07-15 03:37:25.301294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.258 [2024-07-15 03:37:25.304885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.258 [2024-07-15 03:37:25.313990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.258 [2024-07-15 03:37:25.314465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-15 03:37:25.314508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.258 [2024-07-15 03:37:25.314532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.258 [2024-07-15 03:37:25.314780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.258 [2024-07-15 03:37:25.315040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.258 [2024-07-15 03:37:25.315068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.258 [2024-07-15 03:37:25.315088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.258 [2024-07-15 03:37:25.318667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.258 [2024-07-15 03:37:25.327985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.258 [2024-07-15 03:37:25.328559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-15 03:37:25.328606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.258 [2024-07-15 03:37:25.328630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.258 [2024-07-15 03:37:25.328890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.258 [2024-07-15 03:37:25.329140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.258 [2024-07-15 03:37:25.329167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.258 [2024-07-15 03:37:25.329188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.258 [2024-07-15 03:37:25.332787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.258 [2024-07-15 03:37:25.341898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.258 [2024-07-15 03:37:25.342472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.258 [2024-07-15 03:37:25.342518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.258 [2024-07-15 03:37:25.342541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.259 [2024-07-15 03:37:25.342792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.259 [2024-07-15 03:37:25.343051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.259 [2024-07-15 03:37:25.343078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.259 [2024-07-15 03:37:25.343097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.259 [2024-07-15 03:37:25.346676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.259 [2024-07-15 03:37:25.355790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.259 [2024-07-15 03:37:25.356196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-15 03:37:25.356229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.259 [2024-07-15 03:37:25.356248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.259 [2024-07-15 03:37:25.356489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.259 [2024-07-15 03:37:25.356734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.259 [2024-07-15 03:37:25.356759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.259 [2024-07-15 03:37:25.356775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.259 [2024-07-15 03:37:25.360362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.259 [2024-07-15 03:37:25.369340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.259 [2024-07-15 03:37:25.369703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-15 03:37:25.369732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.259 [2024-07-15 03:37:25.369749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.259 [2024-07-15 03:37:25.369973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.259 [2024-07-15 03:37:25.370209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.259 [2024-07-15 03:37:25.370231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.259 [2024-07-15 03:37:25.370246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.259 [2024-07-15 03:37:25.373491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.259 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:19.259 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:19.259 03:37:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:19.259 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:19.259 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:19.259 [2024-07-15 03:37:25.382843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.259 [2024-07-15 03:37:25.383234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-15 03:37:25.383264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.259 [2024-07-15 03:37:25.383280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.259 [2024-07-15 03:37:25.383524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.259 [2024-07-15 03:37:25.383731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.259 [2024-07-15 03:37:25.383754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.259 [2024-07-15 03:37:25.383768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.259 [2024-07-15 03:37:25.387016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.259 [2024-07-15 03:37:25.396398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.259 03:37:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.259 [2024-07-15 03:37:25.396750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.259 [2024-07-15 03:37:25.396780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.259 [2024-07-15 03:37:25.396796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.259 03:37:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:19.259 [2024-07-15 03:37:25.397019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.259 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.259 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:19.259 [2024-07-15 03:37:25.397240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.259 [2024-07-15 03:37:25.397263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.259 [2024-07-15 03:37:25.397278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:19.518 [2024-07-15 03:37:25.400120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.518 [2024-07-15 03:37:25.400524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:19.518 [2024-07-15 03:37:25.410080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.518 [2024-07-15 03:37:25.410565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.518 [2024-07-15 03:37:25.410593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.518 [2024-07-15 03:37:25.410610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.518 [2024-07-15 03:37:25.410836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.518 [2024-07-15 03:37:25.411084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.518 [2024-07-15 03:37:25.411107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.518 [2024-07-15 03:37:25.411122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.518 [2024-07-15 03:37:25.414306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.518 [2024-07-15 03:37:25.423719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.518 [2024-07-15 03:37:25.424126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.518 [2024-07-15 03:37:25.424157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.518 [2024-07-15 03:37:25.424175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.518 [2024-07-15 03:37:25.424421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.518 [2024-07-15 03:37:25.424629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.518 [2024-07-15 03:37:25.424650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.518 [2024-07-15 03:37:25.424664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.518 [2024-07-15 03:37:25.427985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:19.518 [2024-07-15 03:37:25.437429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.518 [2024-07-15 03:37:25.437969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.518 [2024-07-15 03:37:25.438008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.518 [2024-07-15 03:37:25.438029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.518 [2024-07-15 03:37:25.438283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.518 [2024-07-15 03:37:25.438496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.518 [2024-07-15 03:37:25.438519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.518 [2024-07-15 03:37:25.438538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.518 Malloc0 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:19.518 [2024-07-15 03:37:25.441805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:19.518 [2024-07-15 03:37:25.451059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.518 [2024-07-15 03:37:25.451500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.518 [2024-07-15 03:37:25.451529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114bed0 with addr=10.0.0.2, port=4420 00:34:19.518 [2024-07-15 03:37:25.451557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114bed0 is same with the state(5) to be set 00:34:19.518 [2024-07-15 03:37:25.451790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114bed0 (9): Bad file descriptor 00:34:19.518 [2024-07-15 03:37:25.452054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:19.518 [2024-07-15 03:37:25.452077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:19.518 [2024-07-15 03:37:25.452092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:19.518 [2024-07-15 03:37:25.455313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
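
The rpc_cmd calls traced in the surrounding lines rebuild the target state after the restart; collected as plain rpc.py invocations (flags verbatim from this run; the listener is added in the trace just below):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport, 8 KiB IO unit
  "$rpc" bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB ram bdev, 512 B blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
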
00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:19.518 [2024-07-15 03:37:25.459791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.518 03:37:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3346575 00:34:19.518 [2024-07-15 03:37:25.464560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:19.518 [2024-07-15 03:37:25.498132] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:29.482 00:34:29.482 Latency(us) 00:34:29.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:29.482 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:29.482 Verification LBA range: start 0x0 length 0x4000 00:34:29.482 Nvme1n1 : 15.01 6690.50 26.13 8467.90 0.00 8418.73 867.75 19418.07 00:34:29.482 =================================================================================================================== 00:34:29.482 Total : 6690.50 26.13 8467.90 0.00 8418.73 867.75 19418.07 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:29.482 rmmod nvme_tcp 00:34:29.482 rmmod nvme_fabrics 00:34:29.482 rmmod nvme_keyring 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3347357 ']' 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3347357 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3347357 ']' 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3347357 00:34:29.482 03:37:34 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3347357 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3347357' 00:34:29.482 killing process with pid 3347357 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3347357 00:34:29.482 03:37:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3347357 00:34:29.482 03:37:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:29.482 03:37:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:29.482 03:37:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:29.482 03:37:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:29.482 03:37:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:29.482 03:37:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.482 03:37:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:29.482 03:37:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.387 03:37:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:31.387 00:34:31.387 real 0m22.184s 00:34:31.387 user 1m0.125s 00:34:31.387 sys 0m3.914s 00:34:31.387 03:37:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:31.387 03:37:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:31.387 ************************************ 00:34:31.387 END TEST nvmf_bdevperf 00:34:31.387 ************************************ 00:34:31.387 03:37:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:31.387 03:37:37 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:31.387 03:37:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:31.387 03:37:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:31.387 03:37:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:31.388 ************************************ 00:34:31.388 START TEST nvmf_target_disconnect 00:34:31.388 ************************************ 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:31.388 * Looking for test storage... 
00:34:31.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:31.388 03:37:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:33.289 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:33.289 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:33.289 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.289 03:37:39 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:33.289 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:33.290 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:33.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:33.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:34:33.290 00:34:33.290 --- 10.0.0.2 ping statistics --- 00:34:33.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.290 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:33.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:33.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:34:33.290 00:34:33.290 --- 10.0.0.1 ping statistics --- 00:34:33.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.290 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:33.290 ************************************ 00:34:33.290 START TEST nvmf_target_disconnect_tc1 00:34:33.290 ************************************ 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:33.290 
03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:33.290 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.290 [2024-07-15 03:37:39.388922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.290 [2024-07-15 03:37:39.389003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1476e70 with addr=10.0.0.2, port=4420 00:34:33.290 [2024-07-15 03:37:39.389044] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:33.290 [2024-07-15 03:37:39.389070] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:33.290 [2024-07-15 03:37:39.389086] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:33.290 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:33.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:33.290 Initializing NVMe Controllers 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:33.290 00:34:33.290 real 0m0.093s 00:34:33.290 user 0m0.044s 00:34:33.290 sys 
0m0.049s 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:33.290 ************************************ 00:34:33.290 END TEST nvmf_target_disconnect_tc1 00:34:33.290 ************************************ 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:33.290 03:37:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:33.549 ************************************ 00:34:33.549 START TEST nvmf_target_disconnect_tc2 00:34:33.549 ************************************ 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3350388 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3350388 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3350388 ']' 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
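The nvmf_tgt app above is launched inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init set up earlier in this trace, so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) talk over the physical link between the NIC's two ports rather than over loopback. Condensed from the trace above into a stand-alone sketch (device names and addresses taken from the log):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1      # start from clean ports
  ip netns add cvl_0_0_ns_spdk                              # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
  ping -c 1 10.0.0.2                                        # initiator -> target sanity check

Killing the target process later makes 10.0.0.2:4420 refuse connections without touching the link itself, which is exactly the disconnect this test wants to exercise.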
00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:33.549 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:33.549 [2024-07-15 03:37:39.495684] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:33.549 [2024-07-15 03:37:39.495757] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:33.549 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.549 [2024-07-15 03:37:39.561742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:33.549 [2024-07-15 03:37:39.650539] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:33.549 [2024-07-15 03:37:39.650584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:33.549 [2024-07-15 03:37:39.650613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:33.549 [2024-07-15 03:37:39.650625] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:33.549 [2024-07-15 03:37:39.650636] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:33.549 [2024-07-15 03:37:39.650722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:33.549 [2024-07-15 03:37:39.650786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:33.549 [2024-07-15 03:37:39.650837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:33.549 [2024-07-15 03:37:39.650840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:33.808 Malloc0 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:33.808 03:37:39 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:33.808 [2024-07-15 03:37:39.815569] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:33.808 [2024-07-15 03:37:39.843815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3350531 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:33.808 03:37:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:33.808 EAL: No free 2048 kB 
hugepages reported on node 1
00:34:36.372 03:37:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3350388
00:34:36.372 03:37:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:34:36.372 Read completed with error (sct=0, sc=8)
00:34:36.372 starting I/O failed
00:34:36.372 Write completed with error (sct=0, sc=8)
00:34:36.372 starting I/O failed
[the same 'completed with error (sct=0, sc=8) / starting I/O failed' pair repeats for every outstanding read and write on each of the four qpairs of the -q 32 run; each qpair's burst ends in one of the transport errors below]
00:34:36.372 [2024-07-15 03:37:41.867590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:36.372 [2024-07-15 03:37:41.867941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:36.372 [2024-07-15 03:37:41.868251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:36.372 [2024-07-15 03:37:41.868608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:36.372 [2024-07-15 03:37:41.868840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.372 [2024-07-15 03:37:41.868897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.372 qpair failed and we were unable to recover it.
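At this point the target (pid 3350388) has been killed with kill -9 while the reconnect example, started with -q 32 (32 outstanding I/Os per qpair), was mid-run. The aborted completions carry sct=0, sc=8, the NVMe generic status "Command Aborted due to SQ Deletion" that outstanding I/Os receive when their qpair is torn down, and every subsequent reconnect attempt fails with errno = 111 because nothing listens on 10.0.0.2:4420 anymore. A quick way to confirm what errno 111 means on Linux (a throwaway one-liner, not part of the test itself):

  python3 -c 'import errno, os; print(errno.errorcode[111], "=", os.strerror(111))'   # ECONNREFUSED = Connection refused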
[the 'connect() failed, errno = 111' / 'sock connection error of tqpair=... with addr=10.0.0.2, port=4420' / 'qpair failed and we were unable to recover it.' triple seen above repeats from 03:37:41.869031 through 03:37:41.881554, cycling over tqpair values 0x2300f20, 0x7fcbf0000b90, 0x7fcbe8000b90 and 0x7fcbe0000b90 as the example keeps retrying]
00:34:36.373 [2024-07-15 03:37:41.881722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.373 [2024-07-15 03:37:41.881749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.373 qpair failed and we were unable to recover it. 00:34:36.373 [2024-07-15 03:37:41.881867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.373 [2024-07-15 03:37:41.881915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.373 qpair failed and we were unable to recover it. 00:34:36.373 [2024-07-15 03:37:41.882036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.373 [2024-07-15 03:37:41.882066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.373 qpair failed and we were unable to recover it. 00:34:36.373 [2024-07-15 03:37:41.882209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.373 [2024-07-15 03:37:41.882236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.373 qpair failed and we were unable to recover it. 00:34:36.373 [2024-07-15 03:37:41.882377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.882404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.882601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.882627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.882772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.882799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.882951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.882980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.883101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.883128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.883297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.883324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 
00:34:36.374 [2024-07-15 03:37:41.883489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.883533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.883721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.883748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.883890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.883917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.884063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.884090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.884204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.884231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.884371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.884399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.884560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.884589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.884775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.884802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.884909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.884935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.885047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.885073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 
00:34:36.374 [2024-07-15 03:37:41.885235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.885262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.885440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.885487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.885649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.885675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.885794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.885820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.885956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.885983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.886089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.886115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.886248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.886294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.886444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.886473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.886644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.886670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.886813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.886839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 
00:34:36.374 [2024-07-15 03:37:41.887018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.887046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.887163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.887190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.887292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.887318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.887479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.887506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.887650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.887676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.887848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.887902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.888054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.888083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.888227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.888254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.888393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.888419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.888613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.888639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 
00:34:36.374 [2024-07-15 03:37:41.888787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.888815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.888959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.888986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.889099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.889125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.889287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.889313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.889456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.889483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.889623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.889651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.889794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.889820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.889937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.889965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.890128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.890170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.890327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.890354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 
00:34:36.374 [2024-07-15 03:37:41.890518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.890544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.890715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.890741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.890895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.890923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.891087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.891118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.891254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.891281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.891411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.891437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.891550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.891576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.891706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.891746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.891958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.891999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 00:34:36.374 [2024-07-15 03:37:41.892146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.374 [2024-07-15 03:37:41.892174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.374 qpair failed and we were unable to recover it. 
00:34:36.375 [2024-07-15 03:37:41.892313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.892340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.892462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.892489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.892659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.892686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.892800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.892827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.892937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.892965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.893108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.893135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.893278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.893304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.893446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.893473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.893635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.893662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.893803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.893832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 
00:34:36.375 [2024-07-15 03:37:41.893968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.894008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.894136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.894176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.894298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.894326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.894466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.894493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.894628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.894655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.894777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.894805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.894947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.894974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.895115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.895142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.895278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.895305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.895439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.895481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 
00:34:36.375 [2024-07-15 03:37:41.895635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.895662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.895773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.895799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.895966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.895994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.896106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.896133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.896306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.896333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.896506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.896533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.896698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.896725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.896843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.896870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.897013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.897040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.897152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.897179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 
00:34:36.375 [2024-07-15 03:37:41.897352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.897378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.897512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.897552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.897668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.897695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.897858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.897898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.898020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.898047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.898179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.898205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.898336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.898362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.898496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.898525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.898713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.898739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.898883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.898916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 
00:34:36.375 [2024-07-15 03:37:41.899037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.899065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.899206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.899233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.899400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.899426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.899563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.899590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.899767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.899807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.899934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.899963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.900077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.900103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.900247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.900274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.900413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.900439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.900578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.900604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 
00:34:36.375 [2024-07-15 03:37:41.900746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.900772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.900887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.900914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.901055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.901082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.901243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.901273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.901440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.901467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.901609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.901636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.901845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.901890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.902036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.902063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.902172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.902198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.902317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.902344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 
00:34:36.375 [2024-07-15 03:37:41.902508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.902560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.902724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.902751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.902893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.902921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.903060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.903087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.903196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.903222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.903357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.375 [2024-07-15 03:37:41.903384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.375 qpair failed and we were unable to recover it. 00:34:36.375 [2024-07-15 03:37:41.903514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.376 [2024-07-15 03:37:41.903541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.376 qpair failed and we were unable to recover it. 00:34:36.376 [2024-07-15 03:37:41.903702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.376 [2024-07-15 03:37:41.903728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.376 qpair failed and we were unable to recover it. 00:34:36.376 [2024-07-15 03:37:41.903887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.376 [2024-07-15 03:37:41.903928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.376 qpair failed and we were unable to recover it. 00:34:36.376 [2024-07-15 03:37:41.904047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.376 [2024-07-15 03:37:41.904075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.376 qpair failed and we were unable to recover it. 
00:34:36.376 [2024-07-15 03:37:41.904240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.376 [2024-07-15 03:37:41.904267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:36.376 qpair failed and we were unable to recover it.
00:34:36.376 [... the same three-line pattern -- posix_sock_create connect() failed with errno = 111, then an nvme_tcp_qpair_connect_sock connection error, then "qpair failed and we were unable to recover it." -- repeats roughly 210 times between 03:37:41.904 and 03:37:41.944, cycling through tqpair handles 0x7fcbe0000b90, 0x7fcbf0000b90, 0x7fcbe8000b90, and 0x2300f20, with every attempt targeting addr=10.0.0.2, port=4420 ...]
00:34:36.380 [2024-07-15 03:37:41.944233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.380 [2024-07-15 03:37:41.944261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.380 qpair failed and we were unable to recover it.
00:34:36.380 [2024-07-15 03:37:41.944468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.944497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.944617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.944646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.944795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.944824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.945007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.945036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.945156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.945183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.945304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.945330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.945493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.945522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.945641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.945670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.945842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.945889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.946051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.946078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 
00:34:36.380 [2024-07-15 03:37:41.946239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.946268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.946424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.946453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.946644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.946673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.946822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.946851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.947024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.947051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.947188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.947215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.947374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.947403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.947580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.947609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.947784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.947813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.947978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.948006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 
00:34:36.380 [2024-07-15 03:37:41.948110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.948137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.948273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.948299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.948442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.948472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.948643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.948672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.948834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.948860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.948983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.949010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.949113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.949140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.949337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.949363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.949541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.949570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.949699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.949728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 
00:34:36.380 [2024-07-15 03:37:41.949889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.949917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.950061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.950088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.950225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.950254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.950415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.950444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.950601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.950631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.950782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.950811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.950969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.950996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.951105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.951133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.951305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.951346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.951508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.951537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 
00:34:36.380 [2024-07-15 03:37:41.951689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.951719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.951846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.951890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.952035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.952063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.952171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.952197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.952363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.952389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.952573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.952602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.952757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.952786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.952957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.952985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.953103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.953129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.953318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.953346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 
00:34:36.380 [2024-07-15 03:37:41.953505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.953534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.953682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.953711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.953862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.953900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.380 qpair failed and we were unable to recover it. 00:34:36.380 [2024-07-15 03:37:41.954033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.380 [2024-07-15 03:37:41.954059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.954201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.954228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.954339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.954382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.954561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.954590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.954740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.954769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.954902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.954945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.955064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.955091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 
00:34:36.381 [2024-07-15 03:37:41.955233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.955259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.955375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.955403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.955534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.955564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.955773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.955802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.955984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.956012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.956123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.956150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.956300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.956326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.956489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.956515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.956698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.956727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.956861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.956896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 
00:34:36.381 [2024-07-15 03:37:41.957038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.957065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.957180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.957207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.957318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.957344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.957455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.957480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.957644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.957673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.957803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.957829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.957986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.958014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.958163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.958212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.958374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.958400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.958540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.958566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 
00:34:36.381 [2024-07-15 03:37:41.958704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.958730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.958874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.958909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.959067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.959096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.959246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.959276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.959408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.959434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.959570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.959596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.959785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.959814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.959990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.960018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.960138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.960164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.960335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.960361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 
00:34:36.381 [2024-07-15 03:37:41.960492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.960522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.960631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.960657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.960796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.960824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.960979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.961007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.961128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.961155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.961328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.961356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.961515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.961542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.961676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.961718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.961835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.961864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.962015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.962042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 
00:34:36.381 [2024-07-15 03:37:41.962155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.962184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.962345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.962374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.962554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.962581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.962757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.962785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.962923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.962954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.963088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.963114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.963227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.963253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.963412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.963441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.963626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.963652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.963763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.963806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 
00:34:36.381 [2024-07-15 03:37:41.963922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.963952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.964115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.964142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.964257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.964283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.964447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.964476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.964629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.964655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.964759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.964785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.964966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.964994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.381 [2024-07-15 03:37:41.965103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.381 [2024-07-15 03:37:41.965134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.381 qpair failed and we were unable to recover it. 00:34:36.382 [2024-07-15 03:37:41.965248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.965275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 00:34:36.382 [2024-07-15 03:37:41.965481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.965511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 
00:34:36.382 [2024-07-15 03:37:41.965677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.965703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 00:34:36.382 [2024-07-15 03:37:41.965811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.965837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 00:34:36.382 [2024-07-15 03:37:41.965978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.966016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 00:34:36.382 [2024-07-15 03:37:41.966157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.966190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 00:34:36.382 [2024-07-15 03:37:41.966346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.966377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 00:34:36.382 [2024-07-15 03:37:41.966544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.966574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 00:34:36.382 [2024-07-15 03:37:41.966741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.966767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 00:34:36.382 [2024-07-15 03:37:41.966926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.966953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 00:34:36.382 [2024-07-15 03:37:41.967058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.967084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 00:34:36.382 [2024-07-15 03:37:41.967254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.967280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it. 
00:34:36.382 [2024-07-15 03:37:41.967418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.382 [2024-07-15 03:37:41.967447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.382 qpair failed and we were unable to recover it.
00:34:36.382-00:34:36.385 [2024-07-15 03:37:41.967571 - 03:37:42.003757] the same three-message sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats 209 more times, with only the timestamp and the tqpair pointer varying, alternating in runs between tqpair=0x2300f20 and tqpair=0x7fcbe0000b90.
00:34:36.385 [2024-07-15 03:37:42.003895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.003922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.004062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.004088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.004228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.004254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.004406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.004435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.004565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.004594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.004756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.004782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.004927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.004955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.005100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.005136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.005281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.005307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.005412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.005438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 
00:34:36.385 [2024-07-15 03:37:42.005609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.005635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.005814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.005841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.005972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.006017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.006141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.006180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.006347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.006381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.006534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.006564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.006724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.006753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.006937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.006965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.007082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.007127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.007286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.007315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 
00:34:36.385 [2024-07-15 03:37:42.007525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.007556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.007712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.007741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.007940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.007967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.008100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.008126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.008254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.008304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.008485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.008514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.008671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.008697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.008819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.008860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.009020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.009049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.009188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.009214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 
00:34:36.385 [2024-07-15 03:37:42.009351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.009393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.009545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.009575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.009739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.009768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.009936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.009964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.010090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.010117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.010288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.010315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.010481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.010507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.010662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.010690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.010868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.010904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.011051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.011077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 
00:34:36.385 [2024-07-15 03:37:42.011189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.385 [2024-07-15 03:37:42.011215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.385 qpair failed and we were unable to recover it. 00:34:36.385 [2024-07-15 03:37:42.011344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.011373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.011563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.011589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.011743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.011772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.011966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.011993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.012132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.012158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.012290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.012333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.012475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.012509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.012663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.012689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.012821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.012847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 
00:34:36.386 [2024-07-15 03:37:42.012994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.013023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.013163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.013200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.013379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.013408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.013560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.013594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.013727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.013753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.013863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.013907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.014048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.014077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.014199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.014225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.014390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.014432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.014577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.014606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 
00:34:36.386 [2024-07-15 03:37:42.014747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.014774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.014895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.014922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.015035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.015061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.015227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.015255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.015365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.015391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.015558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.015587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.015716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.015744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.015870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.015909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.016067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.016093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.016266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.016292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 
00:34:36.386 [2024-07-15 03:37:42.016484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.016513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.016658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.016688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.016817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.016844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.016979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.017006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.017180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.017213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.017351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.017377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.017482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.017508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.017653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.017682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.017807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.017833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.017956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.017984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 
00:34:36.386 [2024-07-15 03:37:42.018171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.018200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.018346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.018373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.018509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.018535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.018713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.018741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.018902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.018929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.019054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.019097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.019298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.019325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.019455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.019481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.019624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.019668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.019812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.019841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 
00:34:36.386 [2024-07-15 03:37:42.020011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.020039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.020198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.020227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.020373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.020402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.020530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.020556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.020673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.020713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.020926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.020958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.021108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.021135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.021241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.021268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.021445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.021476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.021636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.021663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 
00:34:36.386 [2024-07-15 03:37:42.021782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.021825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.021989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.022020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.022157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.022187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.386 qpair failed and we were unable to recover it. 00:34:36.386 [2024-07-15 03:37:42.022297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.386 [2024-07-15 03:37:42.022323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.022461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.022487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.022623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.022649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.022766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.022811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.023003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.023034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.023174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.023201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.023386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.023415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 
00:34:36.387 [2024-07-15 03:37:42.023567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.023598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.023781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.023811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.023959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.023986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.024102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.024128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.024267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.024293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.024431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.024457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.024625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.024654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.024774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.024801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.024919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.024947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.025069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.025096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 
00:34:36.387 [2024-07-15 03:37:42.025238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.025264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.025397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.025424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.025529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.025555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.025697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.025724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.025844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.025913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.026031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.026060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.026194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.026220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.026381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.026407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.026568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.026602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 00:34:36.387 [2024-07-15 03:37:42.026762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.387 [2024-07-15 03:37:42.026788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.387 qpair failed and we were unable to recover it. 
00:34:36.387 [2024-07-15 03:37:42.026915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.387 [2024-07-15 03:37:42.026942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.387 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 03:37:42.026915 through 03:37:42.064791 for tqpair values 0x2300f20, 0x7fcbe0000b90, and 0x7fcbf0000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:36.390 [2024-07-15 03:37:42.064765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.390 [2024-07-15 03:37:42.064791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:36.390 qpair failed and we were unable to recover it.
00:34:36.390 [2024-07-15 03:37:42.064932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.064959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.065135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.065162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.065323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.065349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.065491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.065517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.065677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.065720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.065908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.065935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.066119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.066148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.066339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.066369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.066531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.066558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.066740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.066769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 
00:34:36.390 [2024-07-15 03:37:42.066940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.066985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.067128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.067156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.067297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.067324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.067472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.067502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.067686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.067712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.067865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.067905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.068031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.068063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.068225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.068253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.068436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.068465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.068604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.068641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 
00:34:36.390 [2024-07-15 03:37:42.068885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.068912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.069071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.069100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.069285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.069312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.069475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.069502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.069612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.069655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.069811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.390 [2024-07-15 03:37:42.069840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.390 qpair failed and we were unable to recover it. 00:34:36.390 [2024-07-15 03:37:42.070003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.070030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.070218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.070247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.070423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.070470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.070653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.070683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 
00:34:36.391 [2024-07-15 03:37:42.070788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.070830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.070987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.071017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.071238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.071264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.071419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.071449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.071596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.071625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.071797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.071826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.072062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.072088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.072255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.072299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.072461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.072487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.072644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.072673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 
00:34:36.391 [2024-07-15 03:37:42.072849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.072885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.073044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.073070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.073211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.073255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.073431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.073478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.073640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.073668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.073775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.073802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.073935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.073966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.074119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.074145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.074278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.074320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.074494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.074542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 
00:34:36.391 [2024-07-15 03:37:42.074704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.074731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.074871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.074923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.075051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.075080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.075243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.075271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.075405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.075447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.075571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.075600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.075793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.075820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.075956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.075983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.076098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.076125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.076288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.076316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 
00:34:36.391 [2024-07-15 03:37:42.076431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.076473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.076654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.076683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.076808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.076834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.077062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.077100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.077266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.077295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.077438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.077465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.077641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.077671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.077896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.077926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.078055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.078081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.078223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.078270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 
00:34:36.391 [2024-07-15 03:37:42.078479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.078528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.078665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.078693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.078833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.078882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.079004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.079033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.079159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.079186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.079289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.079317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.079540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.079567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.079735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.079761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.079920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.079950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.080111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.080140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 
00:34:36.391 [2024-07-15 03:37:42.080325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.080352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.080534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.080563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.080717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.080746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.080909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.080937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.081101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.081127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.081379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.081427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.391 qpair failed and we were unable to recover it. 00:34:36.391 [2024-07-15 03:37:42.081580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.391 [2024-07-15 03:37:42.081607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.081750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.081776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.081939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.081966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.082083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.082109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 
00:34:36.392 [2024-07-15 03:37:42.082245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.082272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.082464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.082493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.082655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.082681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.082810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.082836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.082991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.083036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.083205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.083233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.083393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.083425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.083593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.083644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.083800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.083827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.083941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.083969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 
00:34:36.392 [2024-07-15 03:37:42.084138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.084168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.084320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.084347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.084480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.084524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.084669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.084699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.084889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.084917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.085076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.085106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.085303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.085351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.085513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.085540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.085722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.085752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.085932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.085962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 
00:34:36.392 [2024-07-15 03:37:42.086136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.086163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.086321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.086348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.086488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.086517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.086718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.086745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.086907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.086937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.087102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.087129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.087291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.087318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.087477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.087506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.087654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.087685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.087806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.087833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 
00:34:36.392 [2024-07-15 03:37:42.087977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.088004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.088191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.088243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.088379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.088405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.088550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.088576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.088759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.088789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.088962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.088989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.089095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.089122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.089255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.089284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.089443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.089470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 00:34:36.392 [2024-07-15 03:37:42.089602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.392 [2024-07-15 03:37:42.089646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.392 qpair failed and we were unable to recover it. 
00:34:36.392 [2024-07-15 03:37:42.089805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:36.392 [2024-07-15 03:37:42.089837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 
00:34:36.392 qpair failed and we were unable to recover it. 
00:34:36.392 - 00:34:36.395 [... the same three-record failure repeats back-to-back more than 200 times between 03:37:42.090 and 03:37:42.128, alternating between tqpair=0x7fcbf0000b90 and tqpair=0x7fcbe0000b90, every attempt against addr=10.0.0.2, port=4420 ...]
00:34:36.395 [2024-07-15 03:37:42.128550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.395 [2024-07-15 03:37:42.128576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.395 qpair failed and we were unable to recover it. 00:34:36.395 [2024-07-15 03:37:42.128737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.395 [2024-07-15 03:37:42.128766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.395 qpair failed and we were unable to recover it. 00:34:36.395 [2024-07-15 03:37:42.128930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.395 [2024-07-15 03:37:42.128960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.395 qpair failed and we were unable to recover it. 00:34:36.395 [2024-07-15 03:37:42.129150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.395 [2024-07-15 03:37:42.129177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.395 qpair failed and we were unable to recover it. 00:34:36.395 [2024-07-15 03:37:42.129362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.395 [2024-07-15 03:37:42.129391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.395 qpair failed and we were unable to recover it. 00:34:36.395 [2024-07-15 03:37:42.129519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.395 [2024-07-15 03:37:42.129556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.395 qpair failed and we were unable to recover it. 00:34:36.395 [2024-07-15 03:37:42.129685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.395 [2024-07-15 03:37:42.129714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.395 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.129846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.129874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.130018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.130044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.130211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.130248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 
00:34:36.396 [2024-07-15 03:37:42.130392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.130421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.130554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.130581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.130722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.130756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.130904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.130935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.131065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.131095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.131259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.131286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.131442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.131471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.131630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.131656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.131820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.131865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.132015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.132041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 
00:34:36.396 [2024-07-15 03:37:42.132162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.132189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.132348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.132377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.132538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.132569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.132763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.132789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.132964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.132994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.133143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.133188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.133343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.133373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.133526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.133552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.133690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.133734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.133855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.133901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 
00:34:36.396 [2024-07-15 03:37:42.134081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.134108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.134270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.134296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.134479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.134508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.134637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.134666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.134811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.134840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.135008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.135035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.135154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.135201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.135354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.135384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.135507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.135539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.135699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.135725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 
00:34:36.396 [2024-07-15 03:37:42.135862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.135909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.136062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.136091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.136219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.136247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.136399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.136425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.136585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.136624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.136759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.136789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.136973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.137000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.137116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.137142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.137293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.137319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.137478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.137509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 
00:34:36.396 [2024-07-15 03:37:42.137668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.137698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.137860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.137894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.138015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.138058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.138232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.138274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.138434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.138463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.138617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.138643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.138789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.138833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.139020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.139047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.139202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.139232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.139386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.139413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 
00:34:36.396 [2024-07-15 03:37:42.139531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.139558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.139696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.139725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.139891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.139922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.140064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.140090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.140247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.140273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.140409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.140448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.140618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.140647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.140812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.140841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.396 [2024-07-15 03:37:42.140977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.396 [2024-07-15 03:37:42.141004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.396 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.141162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.141190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 
00:34:36.397 [2024-07-15 03:37:42.141338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.141367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.141530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.141558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.141733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.141767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.141904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.141935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.142116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.142146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.142309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.142341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.142451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.142499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.142678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.142707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.142839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.142868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.143030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.143057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 
00:34:36.397 [2024-07-15 03:37:42.143177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.143204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.143313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.143339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.143520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.143549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.143687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.143714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.143859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.143890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.144057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.144084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.144244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.144281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.144433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.144464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.144578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.144604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.144733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.144774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 
00:34:36.397 [2024-07-15 03:37:42.144933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.144963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.145124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.145150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.145282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.145326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.145465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.145495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.145647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.145678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.145834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.145862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.146004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.146048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.146176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.146206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.146362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.146403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.146594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.146622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 
00:34:36.397 [2024-07-15 03:37:42.146779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.146808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.146951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.146980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.147105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.147135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.147260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.147287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.147399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.147426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.147580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.147609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.147721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.147750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.147887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.147914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.148075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.148102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.148233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.148264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 
00:34:36.397 [2024-07-15 03:37:42.148408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.148439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.148561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.148588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.148749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.148795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.148925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.148954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.149106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.149135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.149315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.149341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.149497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.149530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.149682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.149711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.149863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.149899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.150034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.150060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 
00:34:36.397 [2024-07-15 03:37:42.150201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.150243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.150363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.150392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.150523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.150556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.150720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.150747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.150902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.150947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.151076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.151105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.151283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.151322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.151467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.151494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.151681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.151710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.151887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.151917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 
00:34:36.397 [2024-07-15 03:37:42.152083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.152113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.152249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.152276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.152379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.152406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.152536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.152563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.152709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.152738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.152897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.397 [2024-07-15 03:37:42.152937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.397 qpair failed and we were unable to recover it. 00:34:36.397 [2024-07-15 03:37:42.153116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.398 [2024-07-15 03:37:42.153146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.398 qpair failed and we were unable to recover it. 00:34:36.398 [2024-07-15 03:37:42.153279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.398 [2024-07-15 03:37:42.153308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.398 qpair failed and we were unable to recover it. 00:34:36.398 [2024-07-15 03:37:42.153426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.398 [2024-07-15 03:37:42.153456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.398 qpair failed and we were unable to recover it. 00:34:36.398 [2024-07-15 03:37:42.153622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.398 [2024-07-15 03:37:42.153648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.398 qpair failed and we were unable to recover it. 
00:34:36.398 [2024-07-15 03:37:42.161598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.398 [2024-07-15 03:37:42.161639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:36.398 qpair failed and we were unable to recover it.
00:34:36.398 [... the same triplet repeats for the new tqpair=0x7fcbe8000b90 from 03:37:42.161801 through 03:37:42.190851; every attempt is refused ...]
00:34:36.402 [2024-07-15 03:37:42.191050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.191080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.191260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.191307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.191495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.191525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.191653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.191684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.191791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.191817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.191978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.192027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.192186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.192230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.192425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.192469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.192612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.192638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.192802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.192829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 
00:34:36.402 [2024-07-15 03:37:42.193027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.193071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.193213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.193257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.193401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.193431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.193593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.193619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.193760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.193788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.193934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.193961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.194103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.194130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.194297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.194324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.194434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.194462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 00:34:36.402 [2024-07-15 03:37:42.194590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.402 [2024-07-15 03:37:42.194616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.402 qpair failed and we were unable to recover it. 
00:34:36.403 [2024-07-15 03:37:42.194753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.194780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.194947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.194977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.195178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.195232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.195417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.195460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.195630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.195657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.195788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.195814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.196010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.196054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.196208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.196261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.196430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.196478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.196594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.196621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 
00:34:36.403 [2024-07-15 03:37:42.196788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.196815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.196937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.196967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.197111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.197155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.197314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.197358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.197546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.197589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.197730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.197756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.197921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.197952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.198103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.198146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.198305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.198349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.198485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.198511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 
00:34:36.403 [2024-07-15 03:37:42.198643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.198670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.198810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.198836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.199004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.199049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.199239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.199288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.199452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.199497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.199642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.199668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.199781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.199808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.199971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.403 [2024-07-15 03:37:42.200001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.403 qpair failed and we were unable to recover it. 00:34:36.403 [2024-07-15 03:37:42.200180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.200209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.200413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.200442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 
00:34:36.404 [2024-07-15 03:37:42.200600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.200626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.200768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.200795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.200950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.200996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.201128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.201172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.201311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.201338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.201469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.201513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.201645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.201671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.201786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.201813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.201958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.201985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.202134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.202160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 
00:34:36.404 [2024-07-15 03:37:42.202312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.202357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.202495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.202521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.202659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.202686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.202791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.202818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.202950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.203006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.203162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.203207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.203406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.203451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.203614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.203640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.203751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.203778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.203966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.204012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 
00:34:36.404 [2024-07-15 03:37:42.204198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.204243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.204431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.204475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.204625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.204651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.204792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.204818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.204995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.205043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.205238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.205268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.205472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.205517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.205637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.205664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.205828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.205855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.206023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.206067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 
00:34:36.404 [2024-07-15 03:37:42.206238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.206281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.206425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.206469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.206610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.206636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.206775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.206806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.206934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.206965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-15 03:37:42.207173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-15 03:37:42.207203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.207421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.207468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.207584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.207611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.207754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.207781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.207933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.207989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-15 03:37:42.208117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.208162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.208302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.208335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.208490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.208516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.208656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.208683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.208808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.208834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.208983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.209011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.209170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.209196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.209314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.209341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.209479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.209506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.209646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.209673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-15 03:37:42.209816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.209843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.209974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.210002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.210142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.210176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.210286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.210312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.210463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.210490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.210629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.210656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.210807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.210834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.210978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.211005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.211169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.211196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.211338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.211365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-15 03:37:42.211553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.211596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.211739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.211766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.211904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.211932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.212087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.212133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.212318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.212362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.212537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.212564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.212740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.212766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.212926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.212956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.213109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.213153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.213287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.213314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-15 03:37:42.213461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.213488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.213641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.213677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.213817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.213852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.214000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-15 03:37:42.214031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-15 03:37:42.214137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-15 03:37:42.214174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-15 03:37:42.214318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-15 03:37:42.214344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-15 03:37:42.214483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-15 03:37:42.214510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-15 03:37:42.214625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-15 03:37:42.214651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-15 03:37:42.214792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-15 03:37:42.214821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-15 03:37:42.214999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-15 03:37:42.215043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-15 03:37:42.215178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.406 [2024-07-15 03:37:42.215223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:36.406 qpair failed and we were unable to recover it.
[duplicates condensed: the same connect() failed / sock connection error / qpair failed triplet for tqpair=0x7fcbe8000b90 recurred 32 more times between 03:37:42.215 and 03:37:42.221]
00:34:36.407 [2024-07-15 03:37:42.221277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230ef20 is same with the state(5) to be set
00:34:36.407 [2024-07-15 03:37:42.221488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-15 03:37:42.221531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
[duplicates condensed: the same triplet for tqpair=0x2300f20 recurred 5 more times between 03:37:42.221 and 03:37:42.222]
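[editor's aside, not part of the captured output: errno 111 on Linux is ECONNREFUSED, meaning the target host answered but nothing was listening on the port. The minimal, self-contained C sketch below (hypothetical demo code, not SPDK source) reproduces the condition posix_sock_create() keeps reporting, assuming the log's 10.0.0.2:4420 target with no NVMe-oF listener behind it.]

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Same address and port the failing qpairs were dialing. */
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With no listener on 10.0.0.2:4420 this prints:
             *   connect: Connection refused (errno = 111) */
            printf("connect: %s (errno = %d)\n", strerror(errno), errno);
        }
        close(fd);
        return 0;
    }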
[duplicates condensed: the identical connect() failed, errno = 111 / sock connection error / qpair failed triplet repeated roughly 170 more times between 03:37:42.222 and 03:37:42.254, alternating between tqpair=0x7fcbe8000b90 and tqpair=0x2300f20; every attempt targeted addr=10.0.0.2, port=4420 and none recovered]
00:34:36.411 [2024-07-15 03:37:42.254391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.254419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.254582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.254611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.254750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.254778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.254956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.254984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.255148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.255184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.255321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.255347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.255502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.255530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.255742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.255771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.255939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.255965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.256098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.256124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 
00:34:36.411 [2024-07-15 03:37:42.256297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.256323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.256478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.256507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.256709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.256742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.257003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.257029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.257187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.257216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.257368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.411 [2024-07-15 03:37:42.257396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.411 qpair failed and we were unable to recover it. 00:34:36.411 [2024-07-15 03:37:42.257614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.257642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.257783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.257812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.257981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.258007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.258118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.258144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-15 03:37:42.258332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.258360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.258485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.258514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.258659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.258688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.258835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.258864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.259075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.259101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.259248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.259274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.259387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.259431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.259582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.259611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.259741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.259784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.259974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.260001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-15 03:37:42.260151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.260203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.260357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.260383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.260514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.260540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.260730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.260759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.260922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.260948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.261083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.261126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.261250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.261278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.261400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.261426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.261563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.261589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.261747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.261780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-15 03:37:42.261946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.261973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.262087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.262113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.262287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.262313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.262451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.262477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.262659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.262688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.262805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.262834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.263006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.263034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.263196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.263222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.263379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.263408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.263541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.263567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-15 03:37:42.263728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.263754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.263918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.263949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.264103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.264129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.264314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.264343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.264533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.264560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.264736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.264765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.264905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-15 03:37:42.264949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-15 03:37:42.265062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.265088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.265258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.265284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.265464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.265493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-15 03:37:42.265634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.265663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.265817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.265843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.266020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.266050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.266214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.266240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.266373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.266399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.266507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.266533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.266670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.266703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.266858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.266894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.267035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.267078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.267220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.267249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-15 03:37:42.267400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.267427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.267610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.267639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.267791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.267820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.267975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.268002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.268139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.268192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.268366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.268395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.268553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.268579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.268708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.268750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.268932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.268963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.269095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.269122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-15 03:37:42.269305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.269333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.269509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.269538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.269716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.269741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.269898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.269928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.270081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.270110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.270240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.270266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.270403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.270429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.270608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.270634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.270751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.270776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.270919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.270946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-15 03:37:42.271096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.271125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.271278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.271304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.271437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.271480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.271654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.271683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.271815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.271841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.271975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.272001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.272141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.272176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.272345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.272371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.272505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.272530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.272659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.272687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-15 03:37:42.272808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-15 03:37:42.272850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-15 03:37:42.273014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.273053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.273254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.273285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.273420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.273449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.273563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.273591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.273754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.273784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.273912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.273940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.274081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.274108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.274246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.274275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.274459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.274485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 
00:34:36.414 [2024-07-15 03:37:42.274595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.274638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.274817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.274846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.275055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.275081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.275182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.275224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.275350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.275380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.275560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.275586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.275694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.275737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.275917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.275944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.276094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.276120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.276310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.276357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 
00:34:36.414 [2024-07-15 03:37:42.276536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.276564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.276750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.276777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.276967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.276998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.277147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.277178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.277379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.277405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.277530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.277561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.277690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.277718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.277870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.277911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.278022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.278048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.278176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.278205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 
00:34:36.414 [2024-07-15 03:37:42.278345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.278371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.278503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.278530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.278682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.278711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.278841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.278867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.279035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.279066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.279204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.279230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.279392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.279418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.279536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.279562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.279693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.279719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-15 03:37:42.279889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-15 03:37:42.279915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-15 03:37:42.283551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.415 [2024-07-15 03:37:42.283577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.415 qpair failed and we were unable to recover it.
00:34:36.415 [2024-07-15 03:37:42.283731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.415 [2024-07-15 03:37:42.283763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.415 qpair failed and we were unable to recover it.
00:34:36.415 [2024-07-15 03:37:42.283926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.415 [2024-07-15 03:37:42.283952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.415 qpair failed and we were unable to recover it.
00:34:36.415 [2024-07-15 03:37:42.284065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.415 [2024-07-15 03:37:42.284091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.415 qpair failed and we were unable to recover it.
00:34:36.415 [2024-07-15 03:37:42.284222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.415 [2024-07-15 03:37:42.284249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.415 qpair failed and we were unable to recover it.
00:34:36.415 [2024-07-15 03:37:42.284413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.415 [2024-07-15 03:37:42.284441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.415 qpair failed and we were unable to recover it.
00:34:36.415 [2024-07-15 03:37:42.284598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.415 [2024-07-15 03:37:42.284624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.415 qpair failed and we were unable to recover it.
00:34:36.415 [2024-07-15 03:37:42.284771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.415 [2024-07-15 03:37:42.284828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:36.415 qpair failed and we were unable to recover it.
00:34:36.415 [2024-07-15 03:37:42.285024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.415 [2024-07-15 03:37:42.285056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:36.415 qpair failed and we were unable to recover it.
00:34:36.415 [2024-07-15 03:37:42.285239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.415 [2024-07-15 03:37:42.285265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:36.415 qpair failed and we were unable to recover it.
00:34:36.418 [2024-07-15 03:37:42.314776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 03:37:42.314805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 03:37:42.314952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 03:37:42.314979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 03:37:42.315088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 03:37:42.315114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 03:37:42.315267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 03:37:42.315296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 03:37:42.315460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 03:37:42.315486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 03:37:42.315620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 03:37:42.315645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 03:37:42.315816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 03:37:42.315845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 03:37:42.316024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 03:37:42.316050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 03:37:42.316187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 03:37:42.316231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-15 03:37:42.316402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 03:37:42.316432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-15 03:37:42.316580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-15 03:37:42.316606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.316783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.316811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.316957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.316987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.317141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.317167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.317305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.317347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.317463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.317492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.317641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.317667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.317795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.317822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.318038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.318065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.318173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.318199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 
00:34:36.419 [2024-07-15 03:37:42.318377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.318411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.318568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.318596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.318755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.318781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.318925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.318971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.319112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.319141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.319302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.319327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.319445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.319472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.319586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.319612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.319720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.319746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.319885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.319927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 
00:34:36.419 [2024-07-15 03:37:42.320069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.320098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.320251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.320277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.320385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.320411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.320605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.320633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.320783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.320811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.320975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.321003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.321139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.321183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.321369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.321394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.321512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.321538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.321679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.321705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 
00:34:36.419 [2024-07-15 03:37:42.321843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.321869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.322041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.322067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.322207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.322249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.322397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.322423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.322567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.322593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.322732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.322774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.322906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.322932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.323101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.323131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.323329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.323354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.323488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.323514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 
00:34:36.419 [2024-07-15 03:37:42.323654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.323701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.323851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.323884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.324048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.324074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.324180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.324222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.324351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.324381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.324566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.324592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.324777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.324806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.324965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.324995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.419 qpair failed and we were unable to recover it. 00:34:36.419 [2024-07-15 03:37:42.325160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.419 [2024-07-15 03:37:42.325186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.325317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.325343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-15 03:37:42.325503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.325532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.325694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.325722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.325857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.325909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.326098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.326130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.326294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.326321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.326505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.326535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.326683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.326712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.326903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.326931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.327091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.327121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.327269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.327299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-15 03:37:42.327481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.327507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.327640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.327670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.327823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.327853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.327990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.328017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.328161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.328194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.328366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.328393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.328529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.328555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.328735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.328765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.328884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.328929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.329094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.329120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-15 03:37:42.329272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.329301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.329449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.329478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.329625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.329652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.329785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.329827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.330009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.330039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.330221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.330248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.330403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.330433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.330597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.330625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.330745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.330772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.330885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.330912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-15 03:37:42.331083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.331112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.331271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.331297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.331436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.331464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.331630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.331660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.331847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.331874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.332070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.332099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.332226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.332256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.332414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.332441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.332582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.332626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.332777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.332807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-15 03:37:42.332966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.332994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.333183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.333213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.333362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.333391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.333543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.333569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.333754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.333783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.333934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-15 03:37:42.333964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-15 03:37:42.334087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.334115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.334255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.334283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.334465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.334491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.334621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.334647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 
00:34:36.421 [2024-07-15 03:37:42.334757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.334784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.334943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.334973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.335131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.335158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.335270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.335297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.335488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.335522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.335683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.335710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.335846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.335873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.336020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.336050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.336209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.336236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.336416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.336445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 
00:34:36.421 [2024-07-15 03:37:42.336635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.336679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.336834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.336864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.337030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.337058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.337242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.337272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.337457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.337483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.337713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.337773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.337929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.337960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.338126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.338153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.338304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.338331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.338514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.338572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 
00:34:36.421 [2024-07-15 03:37:42.338731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.338758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.338872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.338904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.339043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.339070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.339212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.339239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.339348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.339390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.339541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.339570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.339704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.339731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.339862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.339893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.340057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.340084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.340192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.340219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 
00:34:36.421 [2024-07-15 03:37:42.340364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.340408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.340594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.340626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.340762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.340790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.340931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.340958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.341108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.341135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.341287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.341314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.341477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.341504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.341660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.341689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.341846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.341873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-15 03:37:42.342014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-15 03:37:42.342040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 
00:34:36.421 [... the same three-line failure sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fcbe0000b90 or 0x7fcbf0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats without interruption from 03:37:42.342146 through 03:37:42.378696 ...]
00:34:36.425 [2024-07-15 03:37:42.378831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.378857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.379035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.379066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.379206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.379239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.379374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.379416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.379541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.379572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.379720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.379749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.379904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.379948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.380097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.380124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.380312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.380338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.380478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.380504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-15 03:37:42.380640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.380666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.380801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.380829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.380970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.380997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.381140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.381183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.381347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.381373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.381557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.381588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.381771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.381801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-15 03:37:42.381950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-15 03:37:42.381977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.382115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.382141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.382268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.382328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-15 03:37:42.382533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.382560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.382713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.382752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.382956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.382984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.383111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.383137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.383250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.383278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.383482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.383512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.383633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.383661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.383795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.383831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.384007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.384046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.384231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.384264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-15 03:37:42.384392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.384421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.384572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.384607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.384771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.384797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.384904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.384932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.385092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.385122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.385277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.385305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.385445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.385490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.385612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.385641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.385802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.385828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.385946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.385974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-15 03:37:42.386087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.386116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.386239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.386267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.386433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.386460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.386603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.386644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.386794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.386837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.387041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.387067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.387215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.387261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.387449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.387477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.387642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.387672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.387822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.387851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-15 03:37:42.388022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.388050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.388208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.388238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.388424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.388453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.388641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.388668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.388801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.388827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.388995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.389026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.389165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.389191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.389313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.389339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.389469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.389499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.389659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.389685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-15 03:37:42.389866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.389906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.390033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.390062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.390193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.390220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.390356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.390385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.390524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.390555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.390718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.390745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.390914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.390944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.391097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.391125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.391321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.391347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-15 03:37:42.391512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-15 03:37:42.391542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-15 03:37:42.391723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.391750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.391886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.391918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.392100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.392129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.392287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.392322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.392456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.392482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.392614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.392640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.392809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.392838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.392980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.393006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.393149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.393203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.393370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.393400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-15 03:37:42.393531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.393557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.393688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.393714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.393886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.393913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.394059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.394085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.394223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.394249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.394437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.394466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.394589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.394623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.394768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.394812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.394973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.395004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.395144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.395180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-15 03:37:42.395321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.395347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.395486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.395512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.395649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.395675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.395779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.395805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.395970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.396000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.396163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.396189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.396372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.396400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.396568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.396598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.396734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.396760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.396903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.396931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-15 03:37:42.397099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.397129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.397258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.397285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.397427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.397454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.397597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.397625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.397760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.397786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.397923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.397950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.398154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.398190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.398327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.398359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.398542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.398571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.398725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.398754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-15 03:37:42.398900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.398927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.399060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.399092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.399235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.399265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.399423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.399450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.399588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.399631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.399786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.399816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.399987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.400017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.400140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.400167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.400282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.400308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.400474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.400501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-15 03:37:42.400652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.400681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.400859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.400901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.401066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.401094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.401287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.401324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.401446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.401475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.401640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.401666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.401846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.401887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.402044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-15 03:37:42.402075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-15 03:37:42.402244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 03:37:42.402270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-15 03:37:42.402407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-15 03:37:42.402454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 
00:34:36.428 [2024-07-15 03:37:42.402583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-15 03:37:42.402612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.431 [... same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated through 2024-07-15 03:37:42.440663 ...]
00:34:36.431 [2024-07-15 03:37:42.440836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.440882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.441003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.441030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.441192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.441218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.441337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.441364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.441557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.441584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.441762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.441791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.441920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.441957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.442155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.442192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.442314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.442343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.442470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.442501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-15 03:37:42.442678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.442705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.442891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.442926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.443079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.443109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.443232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.443258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.443374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.443402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.443567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.443593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.443780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.443809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.443983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.444010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-15 03:37:42.444192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-15 03:37:42.444221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.444403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.444429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-15 03:37:42.444573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.444599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.444760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.444803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.444944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.444972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.445109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.445151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.445313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.445340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.445507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.445533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.445725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.445754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.445902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.445932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.446062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.446089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.446228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.446256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-15 03:37:42.446413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.446442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.446598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.446624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.446760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.446802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.446951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.446982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.447136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.447173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.447358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.447386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.447562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.447592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.447740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.447766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.447908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.447953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.448071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.448100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-15 03:37:42.448284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.448311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.448466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.448495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.448673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.448702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.448853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.448898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.449082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.449112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.449298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.449327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.449488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.449516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.449654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.449698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.449853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.449899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.450088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.450114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-15 03:37:42.450275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.450305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.450456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.450489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.450640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.450670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.450845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.450886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.451059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.451086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.451222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.451250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.451390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.451416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.451580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.451607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.451711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.451737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.451914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.451941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-15 03:37:42.452094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.452124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.452286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.452312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.452424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.452451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.452646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.452675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.452858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.452890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.453057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.453087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.453229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.453259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.453442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.453468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.453589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.453619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.453794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.453824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-15 03:37:42.454008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.454035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.454177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.454203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.454346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.454372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.454539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.454565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.454742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.454771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-15 03:37:42.454914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-15 03:37:42.454944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.455104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.455131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.455284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.455314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.455468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.455497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.455650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.455677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-15 03:37:42.455823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.455852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.456032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.456062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.456202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.456228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.456330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.456357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.456484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.456513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.456675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.456702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.456862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.456894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.457026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.457056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.457210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.457237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.457372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.457415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-15 03:37:42.457592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.457621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.457770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.457801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.457944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.457971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.458111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.458138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.458291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.458318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.458486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.458512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.458638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.458667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.458797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.458841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.459007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.459034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.459170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.459196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-15 03:37:42.459337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.459364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.459520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.459548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.459725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.459755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.459884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.459910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.460056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.460083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.460290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.460317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.460478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.460505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.460668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.460711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.460894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.460924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.461105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.461131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-15 03:37:42.461314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.461343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.461495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.461525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.461681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.461708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.461814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.461841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.462034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.462064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.462251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.462277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.462392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.462419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.462582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.462609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.462739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.462778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.462954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.462984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-15 03:37:42.463126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.463153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.463335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.463383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.463569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.463598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.463728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.463755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.463917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.463945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.464058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.464085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.464240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.464269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.464413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.464442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.464627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.464657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-15 03:37:42.464829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.464858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-15 03:37:42.465002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-15 03:37:42.465029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it.
(The three-line pattern above repeats continuously from 03:37:42.465 through 03:37:42.503 — roughly 200 further connect() attempts in this window, every one failing with errno = 111 — cycling through tqpair values 0x7fcbf0000b90, 0x7fcbe8000b90, and 0x2300f20, always against addr=10.0.0.2, port=4420. The duplicate entries are elided here; the final occurrence is shown below.)
00:34:36.724 [2024-07-15 03:37:42.503083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.503109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it.
00:34:36.724 [2024-07-15 03:37:42.503273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.503299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.724 [2024-07-15 03:37:42.503556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.503610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.724 [2024-07-15 03:37:42.503757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.503786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.724 [2024-07-15 03:37:42.503941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.503967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.724 [2024-07-15 03:37:42.504070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.504097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.724 [2024-07-15 03:37:42.504285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.504314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.724 [2024-07-15 03:37:42.504503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.504529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.724 [2024-07-15 03:37:42.504643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.504684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.724 [2024-07-15 03:37:42.504837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.504884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.724 [2024-07-15 03:37:42.505028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.505055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 
00:34:36.724 [2024-07-15 03:37:42.505167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.505203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.724 [2024-07-15 03:37:42.505366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.505396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.724 [2024-07-15 03:37:42.505551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.724 [2024-07-15 03:37:42.505578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.724 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.505712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.505755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.505882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.505912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.506098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.506124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.506276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.506306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.506461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.506490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.506674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.506700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.506856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.506897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 
00:34:36.725 [2024-07-15 03:37:42.507049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.507079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.507272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.507299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.507453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.507483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.507606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.507636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.507794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.507821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.507940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.507969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.508099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.508125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.508260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.508286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.508422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.508448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.508564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.508591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 
00:34:36.725 [2024-07-15 03:37:42.508700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.508726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.508890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.508917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.509083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.509113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.509283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.509309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.509450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.509477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.509613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.509641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.509799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.509829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.509991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.510018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.510155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.510183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.510368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.510395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 
00:34:36.725 [2024-07-15 03:37:42.510541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.510571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.510706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.510735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.510924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.510951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.511142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.511171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.511322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.511352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.511511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.511538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.511728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.725 [2024-07-15 03:37:42.511757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.725 qpair failed and we were unable to recover it. 00:34:36.725 [2024-07-15 03:37:42.511905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.511936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.512065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.512092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.512233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.512259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 
00:34:36.726 [2024-07-15 03:37:42.512428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.512457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.512612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.512638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.512830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.512860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.513069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.513096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.513262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.513288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.513473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.513503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.513649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.513679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.513841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.513867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.514060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.514090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.514269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.514304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 
00:34:36.726 [2024-07-15 03:37:42.514488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.514515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.514669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.514698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.514854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.514890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.515086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.515112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.515305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.515335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.515483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.515521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.515692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.515719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.515905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.515932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.516043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.516071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.516256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.516284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 
00:34:36.726 [2024-07-15 03:37:42.516456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.516484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.516629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.516659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.516783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.516810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.516952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.516979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.517139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.517180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.517369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.517396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.517551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.517581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.517737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.517766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.517927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.517955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.518120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.518147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 
00:34:36.726 [2024-07-15 03:37:42.518313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.518343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.518502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-15 03:37:42.518528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-15 03:37:42.518636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.518663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.518803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.518832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.518962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.518989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.519129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.519156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.519328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.519355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.519493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.519520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.519631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.519659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.519772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.519802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 
00:34:36.727 [2024-07-15 03:37:42.519951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.519979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.520144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.520181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.520343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.520372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.520531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.520558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.520702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.520730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.520860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.520892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.521042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.521072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.521225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.521255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.521402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.521432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.521602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.521648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 
00:34:36.727 [2024-07-15 03:37:42.521815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.521864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.522036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.522063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.522236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.522263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.522451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.522497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.522617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.522662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.522809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.522838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.523010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.523037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.523220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.523250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.523434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.523464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-15 03:37:42.523617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-15 03:37:42.523646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 
00:34:36.727 [2024-07-15 03:37:42.523817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.523847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.524018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.524046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.524165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.524192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.524377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.524407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.524540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.524584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.524737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.524767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.524898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.524942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.525077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.525103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.525211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.525238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.525375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.525402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 
00:34:36.728 [2024-07-15 03:37:42.525567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.525597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.525751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.525780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.525924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.525951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.526060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.526087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.526223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.526266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.526396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.526426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.526599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.526629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.526775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.526805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.526962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.526990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-15 03:37:42.527129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-15 03:37:42.527156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 
00:34:36.728 [2024-07-15 03:37:42.527264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.728 [2024-07-15 03:37:42.527307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:36.728 qpair failed and we were unable to recover it.
[... the same three-record failure sequence (connect() failed / sock connection error / qpair failed) repeats with advancing timestamps for every reconnect attempt from 03:37:42.527497 onward ...]
00:34:36.734 [2024-07-15 03:37:42.566210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.734 [2024-07-15 03:37:42.566237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:36.734 qpair failed and we were unable to recover it.
00:34:36.734 [2024-07-15 03:37:42.566386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.566413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.566606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.566635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.566775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.566803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.566918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.566946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.567110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.567140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.567327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.567354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.567523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.567554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.567709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.567739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.567894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.567921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.568039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.568066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 
00:34:36.734 [2024-07-15 03:37:42.568222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.568252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.568412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.568439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.568579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.568621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.568757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.568787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.568939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-15 03:37:42.568967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-15 03:37:42.569104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.569148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.569271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.569301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.569485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.569511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.569647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.569676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.569855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.569892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-15 03:37:42.570030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.570057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.570162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.570189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.570356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.570399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.570581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.570608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.570758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.570788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.570918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.570950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.571108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.571139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.571277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.571322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.571445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.571475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.571604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.571631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-15 03:37:42.571769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.571796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.571951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.571982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.572151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.572179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.572341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.572371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.572554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.572581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.572718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.572745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.572927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.572958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.573133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.573163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.573351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.573378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.573532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.573563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-15 03:37:42.573747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.573777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.573914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.573942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.574081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.574127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.574271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.574301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.574430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.574457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.574626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.574672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.574823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.574852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.575014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.575042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.575155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.575182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.575387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.575416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-15 03:37:42.575565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-15 03:37:42.575592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-15 03:37:42.575730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.575778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.575976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.576004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.576141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.576168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.576307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.576351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.576508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.576537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.576724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.576751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.576863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.576913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.577074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.577105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.577231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.577257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 
00:34:36.736 [2024-07-15 03:37:42.577425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.577467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.577616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.577651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.577816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.577844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.577991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.578020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.578207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.578238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.578369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.578397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.578541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.578598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.578787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.578818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.578951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.578977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.579116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.579153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 
00:34:36.736 [2024-07-15 03:37:42.579346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.579376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.579563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.579591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.579744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.579786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.579971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.580012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.580153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.580181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.580345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.580390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.580543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.580573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.580761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.580788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.580940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.580972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.581113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.581143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 
00:34:36.736 [2024-07-15 03:37:42.581306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.581333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.581441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.581468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.581630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.581657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-15 03:37:42.581813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-15 03:37:42.581840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.582009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.582036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.582219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.582249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.582431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.582458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.582613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.582642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.582786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.582816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.582970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.582998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 
00:34:36.737 [2024-07-15 03:37:42.583132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.583159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.583287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.583316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.583477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.583504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.583658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.583689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.583807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.583837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.584029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.584056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.584239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.584269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.584444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.584473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.584635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.584672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.584807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.584833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 
00:34:36.737 [2024-07-15 03:37:42.585033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.585065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.585198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.585225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.585390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.585422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.585588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.585618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.585764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.585791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.585941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.585971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.586121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.586155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.586280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.586307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.586411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.586437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.586634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.586661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 
00:34:36.737 [2024-07-15 03:37:42.586799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.586826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.586942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.586970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.587102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.587129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.587264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.587291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.587441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.587471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.587632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.587660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.587838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.587865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.588036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.737 [2024-07-15 03:37:42.588066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.737 qpair failed and we were unable to recover it. 00:34:36.737 [2024-07-15 03:37:42.588209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.588238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.588426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.588453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 
00:34:36.738 [2024-07-15 03:37:42.588610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.588640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.588763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.588793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.588936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.588964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.589131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.589175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.589321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.589357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.589495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.589521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.589661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.589688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.589800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.589827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.589968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.589996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.590108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.590152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 
00:34:36.738 [2024-07-15 03:37:42.590334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.590364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.590557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.590584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.590689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.590734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.590913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.590953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.591134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.591170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.591328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.591364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.591515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.591545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.591719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.591750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.591900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.591948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.592053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.592078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 
00:34:36.738 [2024-07-15 03:37:42.592229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.592255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.592435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.592465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.592608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.592638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.592820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.592847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.593022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.593050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.593213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.593244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.593427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.593458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.593604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.593634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.593785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.593817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.593978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.594007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 
00:34:36.738 [2024-07-15 03:37:42.594127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.594172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.594325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.594355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.594511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.594538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-15 03:37:42.594680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-15 03:37:42.594707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.594814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.594841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.595013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.595041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.595196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.595226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.595378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.595408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.595572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.595600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.595738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.595781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 
00:34:36.739 [2024-07-15 03:37:42.595949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.595976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.596138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.596165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.596276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.596304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.596445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.596472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.596582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.596609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.596714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.596741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.596872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.596907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.597071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.597098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.597202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.597229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.597367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.597395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 
00:34:36.739 [2024-07-15 03:37:42.597511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.597538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.597676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.597719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.597867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.597904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.598035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.598063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.598195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.598223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.598359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.598390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.598566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.598593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.598751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.598781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.598977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.599005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.599148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.599175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 
00:34:36.739 [2024-07-15 03:37:42.599334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.599363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.599516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.599545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.599696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.599723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.599916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.599947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.600094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.600124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.600288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.600315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.600488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.600520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.600679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.600709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.600846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.600873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 00:34:36.739 [2024-07-15 03:37:42.601056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.739 [2024-07-15 03:37:42.601100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.739 qpair failed and we were unable to recover it. 
00:34:36.739 [2024-07-15 03:37:42.601297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.601324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.601463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.601490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.601596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.601623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.601794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.601824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.601972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.602001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.602111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.602138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.602282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.602309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.602443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.602470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.602652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.602682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.602809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.602839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 
00:34:36.740 [2024-07-15 03:37:42.603037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.603064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.603191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.603223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.603402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.603432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.603566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.603593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.603758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.603784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.603918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.603949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.604110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.604137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.604276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.604303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.604440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.604467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.604598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.604625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 
00:34:36.740 [2024-07-15 03:37:42.604724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.604751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.604921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.604952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.605108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.605135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.605323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.605353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.605473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.605503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.605662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.605689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.605824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.605851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.606005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.606033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.606176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.606203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.606344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.606372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 
00:34:36.740 [2024-07-15 03:37:42.606539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.606571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.606731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.606758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.606900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.606937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.607078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.607123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.607286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.607313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.607451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.607477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.740 qpair failed and we were unable to recover it. 00:34:36.740 [2024-07-15 03:37:42.607620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.740 [2024-07-15 03:37:42.607654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.607806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.607833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.608018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.608046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.608205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.608234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 
00:34:36.741 [2024-07-15 03:37:42.608363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.608389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.608506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.608532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.608632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.608658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.608793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.608820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.608924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.608951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.609084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.609111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.609293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.609320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.609426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.609453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.609585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.609612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.609750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.609776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 
00:34:36.741 [2024-07-15 03:37:42.609931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.609960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.610147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.610174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.610285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.610312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.610454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.610496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.610646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.610675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.610842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.610871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.611001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.611028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.611138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.611182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.611367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.611393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.611534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.611560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 
00:34:36.741 [2024-07-15 03:37:42.611697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.611724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.611862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.611897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.612058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.612087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.612242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.612272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.612422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.612449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.612611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.612662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.612813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.612844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.613043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-15 03:37:42.613071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-15 03:37:42.613249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.613275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.613412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.613440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-15 03:37:42.613549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.613576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.613710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.613737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.613895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.613926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.614091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.614118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.614225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.614252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.614387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.614418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.614608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.614639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.614751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.614778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.614917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.614946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.615056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.615083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-15 03:37:42.615224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.615266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.615387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.615417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.615604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.615630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.615738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.615781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.615931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.615962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.616115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.616152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.616309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.616338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.616482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.616511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.616671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.616698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.616832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.616859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-15 03:37:42.617036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.617065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.617254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.617281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.617430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.617460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.617610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.617639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.617816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.617845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.618002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.618030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.618165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.618192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.618369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.618405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.618536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.618562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.618698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.618728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-15 03:37:42.618861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.618895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.619074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.619105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.619273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.619304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.619447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.619474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-15 03:37:42.619612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-15 03:37:42.619648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-15 03:37:42.619832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-15 03:37:42.619860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-15 03:37:42.620042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-15 03:37:42.620069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-15 03:37:42.620260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-15 03:37:42.620293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-15 03:37:42.620446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-15 03:37:42.620477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-15 03:37:42.620638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-15 03:37:42.620665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 
00:34:36.743 [2024-07-15 03:37:42.620833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-15 03:37:42.620863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it.
00:34:36.743 [... the same three-message sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 03:37:42.620833 through 03:37:42.659427; only the timestamps differ ...]
00:34:36.749 [2024-07-15 03:37:42.659399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.659427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it.
00:34:36.749 [2024-07-15 03:37:42.659584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.659614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.659793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.659824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.659969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.659997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.660133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.660166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.660339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.660369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.660510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.660536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.660642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.660671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.660851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.660891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.661054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.661082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.661217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.661262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 
00:34:36.749 [2024-07-15 03:37:42.661390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.661419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.661592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.661620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.661783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.661813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.661995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.662024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.662178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.662208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.662398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.662428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.662618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.662661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.662860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.662901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.663086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.663114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.663273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.663303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 
00:34:36.749 [2024-07-15 03:37:42.663433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.663461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.663611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.663656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.663823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.663853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.664012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.664043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.664164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.664198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.664338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.664365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.664482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.664517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.664641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.664684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.664840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.664870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.665008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.665043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 
00:34:36.749 [2024-07-15 03:37:42.665181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.665208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.665399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.665428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.665568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.665596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.665773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.665818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.665953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-15 03:37:42.665984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-15 03:37:42.666114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.666143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.666285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.666312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.666482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.666512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.666670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.666700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.666853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.666894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-15 03:37:42.667061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.667089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.667198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.667225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.667332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.667359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.667500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.667527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.667665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.667693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.667827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.667853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.667992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.668050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.668222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.668249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.668363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.668390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.668514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.668544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-15 03:37:42.668701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.668728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.668921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.668952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.669065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.669094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.669226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.669252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.669417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.669462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.669616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.669646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.669804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.669830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.669974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.670001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.670140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.670185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.670314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.670341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-15 03:37:42.670456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.670482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.670635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.670661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.670802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.670830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.670980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.671013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.671153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.671197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.671332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.671359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.671501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.671527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.671671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.671700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.671826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.671853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.672003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.672031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-15 03:37:42.672223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-15 03:37:42.672254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-15 03:37:42.672410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.672438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.672545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.672572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.672740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.672770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.672906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.672934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.673073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.673100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.673291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.673321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.673513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.673540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.673708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.673738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.673897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.673925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 
00:34:36.751 [2024-07-15 03:37:42.674062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.674089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.674246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.674276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.674458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.674509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.674663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.674690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.674834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.674861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.675011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.675038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.675176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.675203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.675358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.675387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.675543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.675573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.675740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.675778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 
00:34:36.751 [2024-07-15 03:37:42.675966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.675997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.676140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.676174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.676317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.676344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.676466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.676493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.676650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.676681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.676860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.676895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.677060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.677089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.677239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.677269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.677427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.677454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.677598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.677624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 
00:34:36.751 [2024-07-15 03:37:42.677755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.677785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.677968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.677996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.751 [2024-07-15 03:37:42.678156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.751 [2024-07-15 03:37:42.678185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.751 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.678314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.678349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.678530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.678557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.678743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.678773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.678932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.678960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.679099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.679127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.679267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.679294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.679454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.679484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 
00:34:36.752 [2024-07-15 03:37:42.679620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.679647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.679755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.679781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.679945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.679975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.680139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.680165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.680286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.680313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.680437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.680463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.680598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.680624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.680801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.680832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.681017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.681048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.681178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.681205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 
00:34:36.752 [2024-07-15 03:37:42.681345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.681373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.681514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.681541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.681679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.681706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.681870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.681921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.682077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.682106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.682246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.682273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.682431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.682475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.682658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.682688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.682836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.682863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 00:34:36.752 [2024-07-15 03:37:42.683062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.752 [2024-07-15 03:37:42.683092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.752 qpair failed and we were unable to recover it. 
00:34:36.752 [2024-07-15 03:37:42.683269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.752 [2024-07-15 03:37:42.683303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:36.752 qpair failed and we were unable to recover it.
00:34:36.752-00:34:36.758 [... the same triplet — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / qpair failed and we were unable to recover it. — repeats for every connection attempt from 03:37:42.683 through 03:37:42.722; tqpair alternates between 0x7fcbf0000b90, 0x7fcbe0000b90, and 0x2300f20, always with addr=10.0.0.2, port=4420 ...]
00:34:36.758 [2024-07-15 03:37:42.722491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.722521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.722710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.722737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.722932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.722961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.723105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.723143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.723311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.723338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.723444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.723489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.723637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.723667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.723822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.723850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.724004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.724032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.724141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.724169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 
00:34:36.758 [2024-07-15 03:37:42.724340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.724367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.724526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.724556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.724706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-15 03:37:42.724736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-15 03:37:42.724863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.724896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.725034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.725061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.725179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.725221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.725353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.725382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.725498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.725530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.725695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.725726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.725898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.725926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 
00:34:36.759 [2024-07-15 03:37:42.726069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.726096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.726287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.726314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.726451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.726478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.726619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.726647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.726832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.726863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.727035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.727064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.727177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.727220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.727376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.727407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.727591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.727618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.727774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.727804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 
00:34:36.759 [2024-07-15 03:37:42.727927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.727958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.728103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.728130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.728294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.728339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.728464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.728495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.728658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.728685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.728865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.728901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.729049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.729080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.729214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.729240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.729350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.729377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.729537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.729567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 
00:34:36.759 [2024-07-15 03:37:42.729730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.729757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.729868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.729901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.730064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.730094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.730229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.730258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.730417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.730461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.730613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.730643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.730818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.730849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.731015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.731043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.731202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-15 03:37:42.731232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-15 03:37:42.731416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.731443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-15 03:37:42.731593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.731623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.731749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.731778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.731906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.731934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.732042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.732070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.732225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.732255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.732416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.732444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.732555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.732599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.732781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.732815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.732949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.732977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.733123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.733150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-15 03:37:42.733263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.733292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.733436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.733463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.733616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.733647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.733798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.733828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.733997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.734024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.734181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.734211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.734367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.734396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.734525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.734553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.734715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.734743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.734924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.734952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-15 03:37:42.735060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.735087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.735234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.735261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.735406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.735436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.735594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.735621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.735807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.735836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.735987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.736017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.736176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.736203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.736394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.736424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.736573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.736603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.736763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.736790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-15 03:37:42.736907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-15 03:37:42.736935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-15 03:37:42.737080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.737107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.737271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.737297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.737446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.737476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.737638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.737667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.737819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.737846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.738004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.738054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.738202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.738231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.738389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.738416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.738532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.738559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-15 03:37:42.738719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.738748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.738888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.738916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.739057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.739084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.739267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.739296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.739459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.739487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.739666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.739696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.739840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.739870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.740049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.740080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.740238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.740268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.740449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.740479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-15 03:37:42.740663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.740689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.740869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.740921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.741074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.741103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.741288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.741315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.741501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.741532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.741685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.741715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.741869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.741905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.742027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.742054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.742214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.742257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.742418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.742445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-15 03:37:42.742607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.742634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.742815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.742843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.743018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.743046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.743231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.743261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.743410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.743440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.743576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.743604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.743714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-15 03:37:42.743742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-15 03:37:42.743930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.743961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-15 03:37:42.744115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.744143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-15 03:37:42.744297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.744327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 
00:34:36.762 [2024-07-15 03:37:42.744476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.744506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-15 03:37:42.744642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.744670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-15 03:37:42.744832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.744859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-15 03:37:42.745001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.745044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-15 03:37:42.745211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.745239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-15 03:37:42.745376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.745420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-15 03:37:42.745572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.745602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-15 03:37:42.745786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.745813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-15 03:37:42.745982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.746012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-15 03:37:42.746131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-15 03:37:42.746161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 
00:34:36.762 [2024-07-15 03:37:42.746343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.762 [2024-07-15 03:37:42.746370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:36.762 qpair failed and we were unable to recover it.
00:34:36.762 [... the same three-line failure (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt timestamped between 03:37:42.746 and 03:37:42.784 ...]
00:34:36.768 [2024-07-15 03:37:42.784714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-15 03:37:42.784743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-15 03:37:42.784905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.784932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.785078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.785104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.785220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.785247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.785388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.785414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.785601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.785631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.785792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.785819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.785965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.785994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.786126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.786154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.786322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.786352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.786513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.786541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 
00:34:36.768 [2024-07-15 03:37:42.786677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.786722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.786860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.786898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.787091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.787117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.787271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.787302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.787461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.787491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.787657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.787684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.787797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.787839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.788012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.788042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.788199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.788227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.788368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.788411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 
00:34:36.768 [2024-07-15 03:37:42.788566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.788596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.788774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.788800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.788986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.789016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.789195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.789225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.789415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.789442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.789596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.789627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.789759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.768 [2024-07-15 03:37:42.789791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.768 qpair failed and we were unable to recover it. 00:34:36.768 [2024-07-15 03:37:42.789964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.789992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.790147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.790175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.790340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.790370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 
00:34:36.769 [2024-07-15 03:37:42.790527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.790554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.790661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.790688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.790825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.790855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.791063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.791090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.791253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.791283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.791434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.791463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.791597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.791624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.791763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.791795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.791969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.792000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.792185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.792212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 
00:34:36.769 [2024-07-15 03:37:42.792328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.792372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.792559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.792585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.792752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.792779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.792981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.793011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.793205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.793233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.793401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.793427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.793609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.793639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.793760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.793792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.793976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.794003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.794161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.794191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 
00:34:36.769 [2024-07-15 03:37:42.794341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.794371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.794517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.794544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.794688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.794730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.794856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.794894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.795069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.795096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.795249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.795279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.795452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.795482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.795617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.795645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.795793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.795837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.796017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.796048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 
00:34:36.769 [2024-07-15 03:37:42.796207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.796234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.796386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.796416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.796568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.796598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.796741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.796768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.796925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.796978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.797143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.797173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.797332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.797359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.797490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.769 [2024-07-15 03:37:42.797536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.769 qpair failed and we were unable to recover it. 00:34:36.769 [2024-07-15 03:37:42.797689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.797719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.797846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.797904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 
00:34:36.770 [2024-07-15 03:37:42.798062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.798089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.798231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.798258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.798386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.798413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.798593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.798657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.798847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.798885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.799041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.799068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.799223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.799250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.799355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.799386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.799516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.799543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.799659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.799702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 
00:34:36.770 [2024-07-15 03:37:42.799832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.799861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.800035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.800062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.800199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.800245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.800399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.800428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.800609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.800636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.800762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.800806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.800970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.801000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.801159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.801185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.801322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.801366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.801518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.801547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 
00:34:36.770 [2024-07-15 03:37:42.801674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.801700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.801818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.801846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.802041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.802068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.802176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.802203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.802345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-15 03:37:42.802372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-15 03:37:42.802549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.802576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.802715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.802742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.802897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.802935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.803086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.803115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.803279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.803307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 
00:34:36.771 [2024-07-15 03:37:42.803493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.803561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.803738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.803768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.803936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.803964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.804078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.804105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.804261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.804302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.804463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.804492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.804635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.804662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.804836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.804866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.805039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.805066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.805233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.805263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 
00:34:36.771 [2024-07-15 03:37:42.805380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.805409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.805595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.805622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.805767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.805793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.805918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.805945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.806081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.806107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.806260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.806290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.806531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.806588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.806764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.806790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.806985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.807016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-15 03:37:42.807168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-15 03:37:42.807197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 
00:34:36.772 [2024-07-15 03:37:42.807351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.772 [2024-07-15 03:37:42.807378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.772 qpair failed and we were unable to recover it. 00:34:36.772 [2024-07-15 03:37:42.807536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.772 [2024-07-15 03:37:42.807581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.772 qpair failed and we were unable to recover it. 00:34:36.772 [2024-07-15 03:37:42.807757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.772 [2024-07-15 03:37:42.807787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.772 qpair failed and we were unable to recover it. 00:34:36.772 [2024-07-15 03:37:42.807966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.772 [2024-07-15 03:37:42.807993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.772 qpair failed and we were unable to recover it. 00:34:36.772 [2024-07-15 03:37:42.808178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.772 [2024-07-15 03:37:42.808208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.772 qpair failed and we were unable to recover it. 00:34:36.772 [2024-07-15 03:37:42.808510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.772 [2024-07-15 03:37:42.808565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.772 qpair failed and we were unable to recover it. 00:34:36.772 [2024-07-15 03:37:42.808728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.772 [2024-07-15 03:37:42.808755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.772 qpair failed and we were unable to recover it. 00:34:36.772 [2024-07-15 03:37:42.808907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.772 [2024-07-15 03:37:42.808935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.772 qpair failed and we were unable to recover it. 00:34:36.772 [2024-07-15 03:37:42.809089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.772 [2024-07-15 03:37:42.809118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.772 qpair failed and we were unable to recover it. 00:34:36.772 [2024-07-15 03:37:42.809262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.772 [2024-07-15 03:37:42.809290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:36.772 qpair failed and we were unable to recover it. 
00:34:36.772 [2024-07-15 03:37:42.809424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.772 [2024-07-15 03:37:42.809451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:36.772 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously for attempt timestamps 03:37:42.809 through 03:37:42.849 (console timestamps 00:34:36.772-00:34:37.062), with tqpair alternating between 0x2300f20, 0x7fcbe0000b90, and 0x7fcbf0000b90; every attempt against addr=10.0.0.2, port=4420 fails with errno = 111 and the qpair is not recovered ...]
00:34:37.062 [2024-07-15 03:37:42.849221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.062 [2024-07-15 03:37:42.849250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:37.062 qpair failed and we were unable to recover it.
00:34:37.062 [2024-07-15 03:37:42.849392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.062 [2024-07-15 03:37:42.849438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.062 qpair failed and we were unable to recover it. 00:34:37.062 [2024-07-15 03:37:42.849722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.062 [2024-07-15 03:37:42.849775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.062 qpair failed and we were unable to recover it. 00:34:37.062 [2024-07-15 03:37:42.849941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.062 [2024-07-15 03:37:42.849969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.062 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.850073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.850101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.850238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.850265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.850414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.850441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.850563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.850591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.850758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.850787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.850925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.850953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.851115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.851164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 
00:34:37.063 [2024-07-15 03:37:42.851391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.851420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.851557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.851586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.851738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.851782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.851960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.851991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.852123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.852151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.852257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.852284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.852474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.852504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.852638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.852666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.852779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.852806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.852946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.852979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 
00:34:37.063 [2024-07-15 03:37:42.853115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.853142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.853283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.853310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.853475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.853505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.853653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.853680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.853791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.853818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.853960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.853990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.854148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.854175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.854315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.854359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.854477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.854507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.854665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.854696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 
00:34:37.063 [2024-07-15 03:37:42.854809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.854839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.855011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.063 [2024-07-15 03:37:42.855045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.063 qpair failed and we were unable to recover it. 00:34:37.063 [2024-07-15 03:37:42.855177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.855204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.855363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.855394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.855522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.855553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.855739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.855767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.855922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.855954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.856143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.856200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.856352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.856379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.856524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.856552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 
00:34:37.064 [2024-07-15 03:37:42.856731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.856762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.856924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.856952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.857106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.857136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.857342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.857401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.857590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.857618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.857754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.857784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.857934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.857965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.858103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.858131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.858272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.858300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.858463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.858491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 
00:34:37.064 [2024-07-15 03:37:42.858626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.858654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.858795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.858823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.858965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.858994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.859108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.859136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.859275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.859302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.859441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.859484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.859647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.859674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.859787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.859815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.860000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.860032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.860197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.860224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 
00:34:37.064 [2024-07-15 03:37:42.860389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.860433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.064 [2024-07-15 03:37:42.860687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.064 [2024-07-15 03:37:42.860737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.064 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.860906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.860933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.861034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.861062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.861250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.861278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.861414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.861442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.861600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.861630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.861743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.861773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.861909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.861938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.862105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.862132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 
00:34:37.065 [2024-07-15 03:37:42.862326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.862380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.862541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.862574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.862728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.862758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.862873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.862924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.863061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.863089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.863226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.863271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.863464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.863516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.863681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.863708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.863867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.863903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.864051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.864081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 
00:34:37.065 [2024-07-15 03:37:42.864247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.864275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.864407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.864452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.864628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.864659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.864816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.864843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.864990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.865018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.865169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.865196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.865336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.865363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.865519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.865549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.865698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.865728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-15 03:37:42.865886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-15 03:37:42.865914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 
00:34:37.065 [2024-07-15 03:37:42.866075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.866120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.866240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.866271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.866396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.866424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.866564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.866592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.866783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.866813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.866982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.867009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.867127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.867154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.867288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.867315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.867455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.867485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.867667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.867696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 
00:34:37.066 [2024-07-15 03:37:42.867849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.867886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.868045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.868071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.868203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.868229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.868340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.868367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.868578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.868604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.868776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.868806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.868968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.868996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.869135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.869162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.869328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.869358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 00:34:37.066 [2024-07-15 03:37:42.869568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.066 [2024-07-15 03:37:42.869629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.066 qpair failed and we were unable to recover it. 
00:34:37.066 [2024-07-15 03:37:42.869766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.869793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.869935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.869978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.870137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.870167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.870302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.870340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.870481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.870525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.870703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.870733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.870863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.870895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.871039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.871067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.871248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.871279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.871471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.871506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 
00:34:37.067 [2024-07-15 03:37:42.871667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.871697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.871885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.871916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.872062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.872098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.872219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.872246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.872447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.872478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.872632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.872663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.872820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.872861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.873028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.873057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.873220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.873247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 00:34:37.067 [2024-07-15 03:37:42.873407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.067 [2024-07-15 03:37:42.873439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.067 qpair failed and we were unable to recover it. 
00:34:37.067 [2024-07-15 03:37:42.873591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-15 03:37:42.873621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats identically for every successive reconnect attempt from 03:37:42.873 through 03:37:42.911; only the first and last occurrences are shown ...]
00:34:37.075 [2024-07-15 03:37:42.911089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-15 03:37:42.911133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-15 03:37:42.911282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.911311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.911473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.911501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.911641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.911683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.911841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.911871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.912012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.912039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.912181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.912208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.912354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.912384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.912545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.912573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.912712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.912757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.912872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.912909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 
00:34:37.075 [2024-07-15 03:37:42.913046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.913074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.913191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.913218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.913395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.913425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.913578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.913606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.913721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.913748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.913931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.913959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.914096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.914124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.914288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.914315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.914484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.914528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.914683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.914711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 
00:34:37.075 [2024-07-15 03:37:42.914853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-15 03:37:42.914898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-15 03:37:42.915046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.915074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.915249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.915275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.915468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.915498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.915619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.915649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.915779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.915808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.915920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.915948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.916067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.916094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.916256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.916284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.916475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.916505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 
00:34:37.076 [2024-07-15 03:37:42.916656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.916685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.916851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.916887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.917048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.917078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.917205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.917236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.917393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.917421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.917604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.917634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.917789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.917819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.917987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.918015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.918134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.918165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.918357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.918387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 
00:34:37.076 [2024-07-15 03:37:42.918546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.918573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.918705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.918749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.918897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.918928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.919091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.919118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.919296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.919326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.919477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.919508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.919667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.919695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.919833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.076 [2024-07-15 03:37:42.919883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.076 qpair failed and we were unable to recover it. 00:34:37.076 [2024-07-15 03:37:42.920043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.920073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.920237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.920264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 
00:34:37.077 [2024-07-15 03:37:42.920406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.920435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.920579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.920608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.920865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.920903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.921070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.921099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.921237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.921268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.921402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.921429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.921539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.921566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.921723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.921753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.921913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.921942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.922079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.922124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 
00:34:37.077 [2024-07-15 03:37:42.922241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.922270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.922468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.922495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.922649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.922678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.922810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.922840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.922997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.923025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.923166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.923210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.923367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.923398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.923557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.923585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.923731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.923758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.923903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.923931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 
00:34:37.077 [2024-07-15 03:37:42.924046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.924074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.924213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.924240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.924401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.924430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.924612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.924639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.924797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.924828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.077 [2024-07-15 03:37:42.924987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.077 [2024-07-15 03:37:42.925018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.077 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.925177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.925205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.925361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.925391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.925561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.925595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.925793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.925820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 
00:34:37.078 [2024-07-15 03:37:42.925961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.925990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.926131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.926174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.926310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.926337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.926482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.926525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.926716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.926744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.926859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.926905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.927044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.927071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.927253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.927282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.927451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.927478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.927606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.927633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 
00:34:37.078 [2024-07-15 03:37:42.927762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.927792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.927935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.927963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.928109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.928136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.928301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.928331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.928468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.928496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.928617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.928645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.928821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-15 03:37:42.928851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-15 03:37:42.929022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.929051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.929206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.929236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.929413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.929443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 
00:34:37.079 [2024-07-15 03:37:42.929606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.929633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.929751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.929778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.929920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.929948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.930116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.930144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.930287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.930314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.930453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.930480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.930616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.930644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.930800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.930830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.930977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.931008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.931158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.931185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 
00:34:37.079 [2024-07-15 03:37:42.931367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.931397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.931602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.931629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.931771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.931798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.931940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.931987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.932147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.932178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.932338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.932366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.932505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.932549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.932704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.932735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.932897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.932928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.933046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.933074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 
00:34:37.079 [2024-07-15 03:37:42.933235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.933265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.933421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.933448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.933569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.933596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.933702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.933730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-15 03:37:42.933865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-15 03:37:42.933899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-15 03:37:42.934090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-15 03:37:42.934120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-15 03:37:42.934276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-15 03:37:42.934306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-15 03:37:42.934467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-15 03:37:42.934494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-15 03:37:42.934601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-15 03:37:42.934628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-15 03:37:42.934815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-15 03:37:42.934845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 
00:34:37.080 [2024-07-15 03:37:42.934998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.080 [2024-07-15 03:37:42.935025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.080 qpair failed and we were unable to recover it.
00:34:37.088 [... the same three-line error sequence repeats back-to-back through timestamp 03:37:42.973834, every attempt against tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420; duplicate repetitions elided ...]
00:34:37.088 [2024-07-15 03:37:42.973999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.974027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.974206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.974236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.974389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.974420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.974581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.974608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.974743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.974770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.974931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.974967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.975095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.975123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.975231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.975259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.975435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.975478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.975607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.975634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 
00:34:37.088 [2024-07-15 03:37:42.975767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.975794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.975936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.975966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.976157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.976184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.976321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.976349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.976517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.976547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.976699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.976726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.976832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.976860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.977056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.977087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.977242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.977269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-15 03:37:42.977459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-15 03:37:42.977489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 
00:34:37.088 [2024-07-15 03:37:42.977642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.977672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.977841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.977869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.977986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.978029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.978180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.978210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.978359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.978386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.978516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.978543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.978679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.978709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.978902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.978930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.979086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.979117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.979247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.979277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 
00:34:37.089 [2024-07-15 03:37:42.979442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.979469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.979610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.979636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.979820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.979850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.980001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.980029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.980207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.980236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.980363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.980393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.980532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.980559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.980692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.980719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.980888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.980932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.981071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.981099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 
00:34:37.089 [2024-07-15 03:37:42.981291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.981321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.981436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.981466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.981627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.981654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.981792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.981818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.981968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.981996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.982137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.982168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.982319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-15 03:37:42.982349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-15 03:37:42.982525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.982555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.982689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.982716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.982854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.982903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 
00:34:37.090 [2024-07-15 03:37:42.983087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.983117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.983254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.983281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.983427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.983454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.983618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.983648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.983786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.983813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.983976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.984004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.984143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.984172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.984301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.984328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.984469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.984496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.984673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.984703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 
00:34:37.090 [2024-07-15 03:37:42.984888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.984917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.985105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.985136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.985315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.985344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.985504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.985531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.985713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.985742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.985899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.985930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.986124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.986151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.986335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.986364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.986493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.986525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.986694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.986724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 
00:34:37.090 [2024-07-15 03:37:42.986933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.986962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.987100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.987127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-15 03:37:42.987297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-15 03:37:42.987325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.987510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.987540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.987723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.987753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.987932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.987960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.988103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.988130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.988293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.988320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.988485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.988513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.988618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.988662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 
00:34:37.091 [2024-07-15 03:37:42.988812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.988842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.989020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.989048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.989201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.989231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.989410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.989440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.989586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.989613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.989768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.989801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.989930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.989961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.990126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.990153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.990282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.990309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.990466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.990496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 
00:34:37.091 [2024-07-15 03:37:42.990652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.990679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.990820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.990863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.991001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.991031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.991200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.991227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-15 03:37:42.991359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-15 03:37:42.991403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.991585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.991615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.991791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.991822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.991975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.992004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.992147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.992174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.992340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.992367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 
00:34:37.092 [2024-07-15 03:37:42.992546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.992576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.992728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.992757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.992919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.992946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.993104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.993133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.993268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.993299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.993461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.993489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.993621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.993664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.993808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.993838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.993991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.994019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.994161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.994189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 
00:34:37.092 [2024-07-15 03:37:42.994329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.994356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.994554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.994582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.994741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.994771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.994923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.994953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.995096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.995122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.995262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.995289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.995426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.995456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.995611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.995638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.995778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.995823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.995976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.996006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 
00:34:37.092 [2024-07-15 03:37:42.996170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.996198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.996317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-15 03:37:42.996360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-15 03:37:42.996513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.093 [2024-07-15 03:37:42.996542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.093 qpair failed and we were unable to recover it. 00:34:37.093 [2024-07-15 03:37:42.996671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.093 [2024-07-15 03:37:42.996699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.093 qpair failed and we were unable to recover it. 00:34:37.093 [2024-07-15 03:37:42.996842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.093 [2024-07-15 03:37:42.996870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.093 qpair failed and we were unable to recover it. 00:34:37.093 [2024-07-15 03:37:42.997039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.093 [2024-07-15 03:37:42.997070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.093 qpair failed and we were unable to recover it. 00:34:37.093 [2024-07-15 03:37:42.997210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.093 [2024-07-15 03:37:42.997237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.093 qpair failed and we were unable to recover it. 00:34:37.093 [2024-07-15 03:37:42.997398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.093 [2024-07-15 03:37:42.997442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.093 qpair failed and we were unable to recover it. 00:34:37.093 [2024-07-15 03:37:42.997619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.093 [2024-07-15 03:37:42.997649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.093 qpair failed and we were unable to recover it. 00:34:37.093 [2024-07-15 03:37:42.997803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.093 [2024-07-15 03:37:42.997830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.093 qpair failed and we were unable to recover it. 
00:34:37.093 [2024-07-15 03:37:42.997992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.093 [2024-07-15 03:37:42.998020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.093 qpair failed and we were unable to recover it.
[log condensed: the three-message sequence above (a posix_sock_create connect() failure with errno = 111, which is ECONNREFUSED, meaning nothing was listening at the target, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7fcbf0000b90 at 10.0.0.2 port 4420 and the unrecoverable-qpair notice) repeats for every reconnect attempt from 03:37:42.998 through 03:37:43.035, with only the timestamps advancing]
00:34:37.100 [2024-07-15 03:37:43.035931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.100 [2024-07-15 03:37:43.035959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.100 qpair failed and we were unable to recover it.
00:34:37.100 [2024-07-15 03:37:43.036096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-15 03:37:43.036123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.036246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.036277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.036440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.036467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.036607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.036634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.036748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.036774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.036904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.036932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.037037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.037065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.037190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.037222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.037379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.037406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.037524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.037552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 
00:34:37.101 [2024-07-15 03:37:43.037687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.037715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.037843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.037873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.038042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.038083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.038248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.038280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.038443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.038471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.038689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.038753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.038906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.038937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.039073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.039100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.039216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.039242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.039364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.039391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 
00:34:37.101 [2024-07-15 03:37:43.039529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.039556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.039691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.039718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.039882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.039927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.040035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.040062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.040227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.040271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.040400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.040430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.040565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.040593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-15 03:37:43.040732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-15 03:37:43.040759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.040947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.040975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.041089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.041116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 
00:34:37.102 [2024-07-15 03:37:43.041257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.041302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.041420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.041450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.041631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.041658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.041820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.041850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.041999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.042027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.042161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.042188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.042351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.042378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.042492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.042519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.042658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.042685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.042850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.042886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 
00:34:37.102 [2024-07-15 03:37:43.043005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.043032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.043143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.043170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.043305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.043348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.043505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.043535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.043665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.043693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.043810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.043836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.043978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.044006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.044117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.044144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.044286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.044314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.044454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.044484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 
00:34:37.102 [2024-07-15 03:37:43.044649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.044675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.044777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.044804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.044968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.044996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.045119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-15 03:37:43.045147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-15 03:37:43.045275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.045318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.045476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.045506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.045690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.045717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.045825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.045871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.046042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.046068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.046179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.046204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 
00:34:37.103 [2024-07-15 03:37:43.046393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.046422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.046541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.046570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.046725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.046751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.046866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.046899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.047015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.047040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.047178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.047203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.047320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.047368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.047525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.047553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.047721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.047750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.047906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.047950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 
00:34:37.103 [2024-07-15 03:37:43.048064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.048090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.048205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.048231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.048399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.048425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.048597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.048622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.048755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.048781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.048959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.048986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.049104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.049129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.049268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.049294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.049437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.049469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.049658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.049686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 
00:34:37.103 [2024-07-15 03:37:43.049819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.049847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.049985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.050011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.050153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.050195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.050358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.103 [2024-07-15 03:37:43.050385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.103 qpair failed and we were unable to recover it. 00:34:37.103 [2024-07-15 03:37:43.050512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.050537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.050699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.050725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.050868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.050901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.051037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.051062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.051205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.051233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.051394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.051419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 
00:34:37.104 [2024-07-15 03:37:43.051602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.051630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.051783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.051811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.051956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.051982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.052117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.052152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.052310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.052338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.052470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.052495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.052637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.052662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.052796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.052822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.052967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.052994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.053187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.053215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 
00:34:37.104 [2024-07-15 03:37:43.053357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.053385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.053546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.053573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.053693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.053718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.053917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.053945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.054078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.054103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.054248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.054273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.054437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.054465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.054628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.054654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.054770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.054795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.104 qpair failed and we were unable to recover it. 00:34:37.104 [2024-07-15 03:37:43.054906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.104 [2024-07-15 03:37:43.054933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 
00:34:37.105 [2024-07-15 03:37:43.055053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.055078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.055194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.055220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.055376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.055404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.055528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.055553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.055718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.055744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.055890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.055921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.056053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.056080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.056273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.056302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.056456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.056484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.056615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.056640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 
00:34:37.105 [2024-07-15 03:37:43.056786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.056812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.056980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.057010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.057150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.057175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.057299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.057328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.057479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.057507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.057626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.057651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.057764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.057789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.057935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.057961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.058081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.058106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.058252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.058296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 
00:34:37.105 [2024-07-15 03:37:43.058468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.058497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.058654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.058679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.058865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.058909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.059062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.059091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.059227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.059257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.059418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.059446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.059592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.059620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.059783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.059808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.059970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.059996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 00:34:37.105 [2024-07-15 03:37:43.060113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.060138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.105 qpair failed and we were unable to recover it. 
00:34:37.105 [2024-07-15 03:37:43.060283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.105 [2024-07-15 03:37:43.060308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.060450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.060476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.060641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.060666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.060805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.060830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.060984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.061010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.061114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.061157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.061310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.061335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.061472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.061513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.061675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.061703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.061864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.061895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 
00:34:37.106 [2024-07-15 03:37:43.062030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.062055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.062201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.062229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.062389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.062414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.062551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.062576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.062740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.062768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.062918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.062946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.063090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.063115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.063238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.063267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.063402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.063427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.063543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.063568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 
00:34:37.106 [2024-07-15 03:37:43.063696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.063724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.063874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.063932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.064049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.064075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.064247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.064274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.064435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.064460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.064603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.064629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.064733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.064758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.064905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.064931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.065043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.065069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.065182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.065207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 
00:34:37.106 [2024-07-15 03:37:43.065344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.065370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.065539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.065568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.065720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.065748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.106 [2024-07-15 03:37:43.065934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.106 [2024-07-15 03:37:43.065960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.106 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.066067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.066094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.066261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.066289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.066425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.066450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.066590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.066615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.066801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.066829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.066977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.067003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 
00:34:37.107 [2024-07-15 03:37:43.067143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.067183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.067335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.067364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.067495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.067520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.067686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.067712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.067905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.067939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.068100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.068126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.068311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.068340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.068488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.068516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.068682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.068711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.068859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.068890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 
00:34:37.107 [2024-07-15 03:37:43.069005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.069030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.069145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.069171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.069359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.069387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.069567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.069595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.069783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.069826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.070005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.070031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.070196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.070225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.070350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.070375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.070484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.070509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.070691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.070719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 
00:34:37.107 [2024-07-15 03:37:43.070859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.070891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.107 [2024-07-15 03:37:43.071018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.107 [2024-07-15 03:37:43.071043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.107 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.071186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.071212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.071346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.071371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.071509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.071534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.071705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.071730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.071902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.071929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.072096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.072121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.072246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.072274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.072403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.072428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 
00:34:37.108 [2024-07-15 03:37:43.072561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.072587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.072752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.072780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.072948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.072975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.073139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.073164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.073331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.073360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.073513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.073539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.073657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.073683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.073809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.073838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.073998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.074024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.074181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.074209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 
00:34:37.108 [2024-07-15 03:37:43.074357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.074385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.074512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.074542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.074680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.074705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.074896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.074932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.075065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.075090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.075242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.075267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.075471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.075499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.075654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.075680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.075782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.075807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.075971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.076002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 
00:34:37.108 [2024-07-15 03:37:43.076152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.076177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.076286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.076327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.076473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.076502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.076638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.076678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.076833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.076863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.077043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.077069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.077176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.108 [2024-07-15 03:37:43.077201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.108 qpair failed and we were unable to recover it. 00:34:37.108 [2024-07-15 03:37:43.077382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.077410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.077561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.077590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.077718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.077744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 
00:34:37.109 [2024-07-15 03:37:43.077913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.077964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.078143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.078176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.078327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.078353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.078465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.078490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.078686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.078711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.078847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.078873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.079027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.079053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.079162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.079203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.079383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.079409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.079514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.079556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 
00:34:37.109 [2024-07-15 03:37:43.079678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.079706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.079869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.079910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.080050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.080076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.080218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.080247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.080441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.080467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.080604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.080629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.080806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.080838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.081009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.081035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.081218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.081246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.081364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.081392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 
00:34:37.109 [2024-07-15 03:37:43.081559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.081584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.081744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.081772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.109 qpair failed and we were unable to recover it. 00:34:37.109 [2024-07-15 03:37:43.081971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.109 [2024-07-15 03:37:43.082000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.082156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.082181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.082317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.082358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.082533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.082562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.082750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.082775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.082937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.082966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.083119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.083149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.083283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.083313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 
00:34:37.110 [2024-07-15 03:37:43.083452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.083502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.083651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.083679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.083831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.083859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.084038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.084077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.084203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.084231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.084349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.084375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.084496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.084523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.084625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.084651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.084795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.084827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.084978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.085005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 
00:34:37.110 [2024-07-15 03:37:43.085112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.085138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.085306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.085332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.085498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.085531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.085662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.085693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.085852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.085889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.086085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.086111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.086282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.086311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.086531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.086583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.086732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.086760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.086944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.086970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 
00:34:37.110 [2024-07-15 03:37:43.087088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.087115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.087261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.087286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.087427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.087453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.087609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.110 [2024-07-15 03:37:43.087634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.110 qpair failed and we were unable to recover it. 00:34:37.110 [2024-07-15 03:37:43.087774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.087799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.087935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.087962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.088080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.088107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.088239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.088278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.088429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.088455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.088582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.088609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 
00:34:37.111 [2024-07-15 03:37:43.088753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.088778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.088946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.088972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.089113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.089140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.089274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.089299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.089411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.089438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.089608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.089633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.089800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.089825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.089949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.089977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.090142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.090168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.090308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.090335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 
00:34:37.111 [2024-07-15 03:37:43.090504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.090530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.090705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.090731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.090883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.090924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.091074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.091101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.091232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.091259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.091424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.091449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.091584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.091610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.091749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.091777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.091958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.091985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 00:34:37.111 [2024-07-15 03:37:43.092119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.111 [2024-07-15 03:37:43.092154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.111 qpair failed and we were unable to recover it. 
00:34:37.113 [2024-07-15 03:37:43.100954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.113 [2024-07-15 03:37:43.100981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.113 qpair failed and we were unable to recover it.
00:34:37.113 [2024-07-15 03:37:43.101122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.113 [2024-07-15 03:37:43.101148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.113 qpair failed and we were unable to recover it.
00:34:37.113 [2024-07-15 03:37:43.101268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.113 [2024-07-15 03:37:43.101293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.113 qpair failed and we were unable to recover it.
00:34:37.113 [2024-07-15 03:37:43.101398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.113 [2024-07-15 03:37:43.101423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.113 qpair failed and we were unable to recover it.
00:34:37.113 [2024-07-15 03:37:43.101580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.113 [2024-07-15 03:37:43.101606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.113 qpair failed and we were unable to recover it.
00:34:37.113 [2024-07-15 03:37:43.101742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.113 [2024-07-15 03:37:43.101769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.113 qpair failed and we were unable to recover it.
00:34:37.113 [2024-07-15 03:37:43.101904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.113 [2024-07-15 03:37:43.101937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.113 qpair failed and we were unable to recover it.
00:34:37.113 [2024-07-15 03:37:43.102101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.113 [2024-07-15 03:37:43.102150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.113 qpair failed and we were unable to recover it.
00:34:37.113 [2024-07-15 03:37:43.102329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.113 [2024-07-15 03:37:43.102357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.113 qpair failed and we were unable to recover it.
00:34:37.113 [2024-07-15 03:37:43.102475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.113 [2024-07-15 03:37:43.102503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.113 qpair failed and we were unable to recover it.
00:34:37.117 [2024-07-15 03:37:43.125713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.125738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.125855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.125888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.126043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.126078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.126231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.126280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.126430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.126460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.126621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.126647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.126788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.126814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.126974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.127004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.127164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.127194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.127420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.127467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 
00:34:37.117 [2024-07-15 03:37:43.127638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.127664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.127791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.127817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.127973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.128002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.128138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.128167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.128314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.128342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.128497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.128522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.128641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.128667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.128834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.128859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.129038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.129066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.129215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.129244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 
00:34:37.117 [2024-07-15 03:37:43.129439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.129467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.129624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.129649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.129757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.129784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.129943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.129972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.130130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.130161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.130310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.130338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.130487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.130513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.130627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.130652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.130760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.130785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.130935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.130964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 
00:34:37.117 [2024-07-15 03:37:43.131155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.131191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.131319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.131348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.131534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.131560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.131663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.131689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.131794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.131819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.117 [2024-07-15 03:37:43.131956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.117 [2024-07-15 03:37:43.131982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.117 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.132091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.132117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.132235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.132260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.132394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.132419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.132543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.132568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 
00:34:37.118 [2024-07-15 03:37:43.132711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.132736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.132874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.132941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.133161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.133195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.133345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.133373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.133536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.133562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.133728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.133753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.133918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.133945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.134089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.134115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.134251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.134276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.134384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.134410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 
00:34:37.118 [2024-07-15 03:37:43.134558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.134584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.134743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.134768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.134907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.134940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.135057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.135083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.135220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.135245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.135382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.135408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.135535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.135574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.135722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.135751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.135935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.135966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.136181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.136232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 
00:34:37.118 [2024-07-15 03:37:43.136444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.136495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.136658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.136684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.136828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.136856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.137055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.137098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.137285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.137316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.137596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.137625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.137781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.137807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.137939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.137968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.138136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.138164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.138371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.138423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 
00:34:37.118 [2024-07-15 03:37:43.138588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.138616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.138788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.138813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.138946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.138976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.139168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.139193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.139423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.139471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.118 [2024-07-15 03:37:43.139649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.118 [2024-07-15 03:37:43.139675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.118 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.139808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.139833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.139972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.140001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.140152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.140180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.140344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.140372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 
00:34:37.119 [2024-07-15 03:37:43.140554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.140580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.140683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.140709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.140819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.140846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.141007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.141036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.141194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.141222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.141418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.141446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.141636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.141661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.141775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.141802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.141982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.142013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.142262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.142307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 
00:34:37.119 [2024-07-15 03:37:43.142471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.142515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.142672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.142698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.142814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.142840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.143029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.143059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.143211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.143242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.143418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.143446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.143605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.143630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.143774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.143800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.143987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.144016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.144186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.144215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 
00:34:37.119 [2024-07-15 03:37:43.144413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.144439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.144579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.144606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.144741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.144767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.144945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.144974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.145163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.145191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.145386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.145415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.145550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.145575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.145714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.119 [2024-07-15 03:37:43.145739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.119 qpair failed and we were unable to recover it. 00:34:37.119 [2024-07-15 03:37:43.145885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.145935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.146154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.146188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 
00:34:37.120 [2024-07-15 03:37:43.146429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.146457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.146606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.146632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.146737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.146764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.146896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.146940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.147142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.147192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.147396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.147425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.147566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.147591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.147727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.147752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.147933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.147962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.148158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.148191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 
00:34:37.120 [2024-07-15 03:37:43.148376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.148404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.148565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.148591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.148706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.148732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.148880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.148922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.149136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.149164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.149422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.149455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.149630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.149655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.149791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.149817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.149952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.149981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 00:34:37.120 [2024-07-15 03:37:43.150152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.120 [2024-07-15 03:37:43.150180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.120 qpair failed and we were unable to recover it. 
00:34:37.120 [2024-07-15 03:37:43.150322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.120 [2024-07-15 03:37:43.150350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.121 qpair failed and we were unable to recover it.
[... this connect()/qpair-connect error pair repeats roughly 45 more times for tqpair=0x7fcbf0000b90, 03:37:43.150-03:37:43.159 ...]
00:34:37.122 [2024-07-15 03:37:43.159013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.122 [2024-07-15 03:37:43.159046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.122 qpair failed and we were unable to recover it.
[... roughly 70 more occurrences for tqpair=0x7fcbe8000b90, 03:37:43.159-03:37:43.172 ...]
00:34:37.125 [2024-07-15 03:37:43.172050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.125 [2024-07-15 03:37:43.172093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.125 qpair failed and we were unable to recover it.
[... roughly 90 further occurrences through 03:37:43.189, interleaved among tqpair=0x7fcbf0000b90, 0x7fcbe8000b90, and 0x2300f20, all against addr=10.0.0.2, port=4420 ...]
00:34:37.400 [2024-07-15 03:37:43.189227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.400 [2024-07-15 03:37:43.189274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.400 qpair failed and we were unable to recover it.
00:34:37.400 [2024-07-15 03:37:43.189435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.400 [2024-07-15 03:37:43.189480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.400 qpair failed and we were unable to recover it. 00:34:37.400 [2024-07-15 03:37:43.189645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.400 [2024-07-15 03:37:43.189671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.400 qpair failed and we were unable to recover it. 00:34:37.400 [2024-07-15 03:37:43.189791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.400 [2024-07-15 03:37:43.189818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.400 qpair failed and we were unable to recover it. 00:34:37.400 [2024-07-15 03:37:43.189980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.400 [2024-07-15 03:37:43.190026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.400 qpair failed and we were unable to recover it. 00:34:37.400 [2024-07-15 03:37:43.190214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.400 [2024-07-15 03:37:43.190242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.400 qpair failed and we were unable to recover it. 00:34:37.400 [2024-07-15 03:37:43.190422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.400 [2024-07-15 03:37:43.190451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.400 qpair failed and we were unable to recover it. 00:34:37.400 [2024-07-15 03:37:43.190613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.400 [2024-07-15 03:37:43.190639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.400 qpair failed and we were unable to recover it. 00:34:37.400 [2024-07-15 03:37:43.190805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.400 [2024-07-15 03:37:43.190832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.400 qpair failed and we were unable to recover it. 00:34:37.400 [2024-07-15 03:37:43.191042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.400 [2024-07-15 03:37:43.191071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.400 qpair failed and we were unable to recover it. 00:34:37.400 [2024-07-15 03:37:43.191219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.400 [2024-07-15 03:37:43.191248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.400 qpair failed and we were unable to recover it. 
00:34:37.400 [2024-07-15 03:37:43.191434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.400 [2024-07-15 03:37:43.191481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.400 qpair failed and we were unable to recover it. 00:34:37.400 [2024-07-15 03:37:43.191645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.191688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.191822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.191849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.192022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.192053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.192232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.192275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.192435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.192478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.192610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.192636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.192803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.192829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.193001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.193046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.193244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.193286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 
00:34:37.401 [2024-07-15 03:37:43.193446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.193490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.193627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.193653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.193817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.193842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.194010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.194059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.194193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.194236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.194428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.194477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.194628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.194656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.194768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.194796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.194998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.195028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.195164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.195190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 
00:34:37.401 [2024-07-15 03:37:43.195354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.195379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.195526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.195554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.195734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.195762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.195923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.195950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.196064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.196090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.196229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.196270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.196431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.196456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.196689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.196718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.196873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.196909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.197065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.197090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 
00:34:37.401 [2024-07-15 03:37:43.197245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.197274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.197425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.197452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.197601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.197628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.197779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.197807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.197945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.197972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.198109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.198134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.198272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.198318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.198495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.401 [2024-07-15 03:37:43.198524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.401 qpair failed and we were unable to recover it. 00:34:37.401 [2024-07-15 03:37:43.198697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.198725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.198912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.198948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 
00:34:37.402 [2024-07-15 03:37:43.199086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.199110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.199272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.199300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.199421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.199464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.199592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.199620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.199756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.199781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.199917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.199948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.200102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.200127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.200291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.200319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.200470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.200498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.200664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.200699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 
00:34:37.402 [2024-07-15 03:37:43.200864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.200902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.201056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.201082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.201291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.201337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.201493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.201521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.201711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.201750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.201945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.201977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.202136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.202164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.202310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.202338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.202494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.202522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.202676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.202704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 
00:34:37.402 [2024-07-15 03:37:43.202857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.202892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.203050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.203075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.203208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.203252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.203379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.203407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.203557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.203585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.203742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.203770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.203913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.203942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.204083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.204108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.204225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.204267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.204443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.204472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 
00:34:37.402 [2024-07-15 03:37:43.204688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.204717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.204862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.204897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.205041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.205067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.205226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.205251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.205436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.402 [2024-07-15 03:37:43.205464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.402 qpair failed and we were unable to recover it. 00:34:37.402 [2024-07-15 03:37:43.205621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.205649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.205800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.205826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.205940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.205966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.206104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.206129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.206299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.206325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 
00:34:37.403 [2024-07-15 03:37:43.206466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.206491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.206641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.206671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.206784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.206811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.206981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.207008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.207145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.207170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.207321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.207346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.207539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.207568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.207703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.207729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.207900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.207938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.208118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.208146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 
00:34:37.403 [2024-07-15 03:37:43.208293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.208340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.208580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.208608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.208735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.208760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.208945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.208974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.209150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.209178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.209392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.209420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.209572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.209598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.209717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.209742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.209875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.209923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.210055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.210085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 
00:34:37.403 [2024-07-15 03:37:43.210275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.210304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.210475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.210503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.210637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.210662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.210832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.210857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.403 [2024-07-15 03:37:43.211041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.403 [2024-07-15 03:37:43.211070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.403 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.211210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.211239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.211452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.211480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.211633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.211658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.211797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.211826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.211997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.212026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 
00:34:37.404 [2024-07-15 03:37:43.212250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.212278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.212482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.212536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.212694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.212719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.212957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.212986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.213200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.213229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.213440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.213468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.213617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.213643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.213857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.213889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.214049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.214078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 00:34:37.404 [2024-07-15 03:37:43.214248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.404 [2024-07-15 03:37:43.214276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.404 qpair failed and we were unable to recover it. 
00:34:37.404 [2024-07-15 03:37:43.214424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.404 [2024-07-15 03:37:43.214452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.404 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats verbatim for 23 further attempts between 03:37:43.214 and 03:37:43.219 ...]
00:34:37.405 [2024-07-15 03:37:43.219040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.405 [2024-07-15 03:37:43.219084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.405 qpair failed and we were unable to recover it.
[... eight further identical failures against tqpair=0x7fcbf0000b90 between 03:37:43.219 and 03:37:43.220 ...]
00:34:37.405 [2024-07-15 03:37:43.220825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.405 [2024-07-15 03:37:43.220864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.405 qpair failed and we were unable to recover it.
[... the tqpair=0x2300f20 failure then repeats unchanged, several attempts per millisecond (roughly 180 in all), through 03:37:43.250 ...]
00:34:37.410 [2024-07-15 03:37:43.250808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.250833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.250982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.251008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.251172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.251198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.251339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.251364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.251505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.251530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.251668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.251693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.251804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.251829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.251949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.251975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.252095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.252121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.252258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.252284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 
00:34:37.410 [2024-07-15 03:37:43.252396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.252421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.252555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.252580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.252717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.252742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.252855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.252887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.253031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.253056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.253225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.253250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.253358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.253383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.253498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.253523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.253668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.253693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.253854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.253886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 
00:34:37.410 [2024-07-15 03:37:43.254030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.254055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.254187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.254212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.254327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.254352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.254483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.254508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.254610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.254635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.254804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.254829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.254947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.254973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.255089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.410 [2024-07-15 03:37:43.255114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.410 qpair failed and we were unable to recover it. 00:34:37.410 [2024-07-15 03:37:43.255246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.255272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.255434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.255459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 
00:34:37.411 [2024-07-15 03:37:43.255569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.255594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.255759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.255785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.255929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.255956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.256119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.256144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.256258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.256284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.256423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.256448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.256583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.256608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.256745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.256770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.256904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.256931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.257095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.257120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 
00:34:37.411 [2024-07-15 03:37:43.257280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.257306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.257447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.257472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.257576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.257601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.257737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.257762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.257931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.257957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.258062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.258088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.258210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.258235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.258346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.258371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.258532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.258558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.258669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.258700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 
00:34:37.411 [2024-07-15 03:37:43.258836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.258862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.259041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.259066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.259200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.259225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.259343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.259368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.259510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.259535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.259637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.259662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.259802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.259827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.259963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.259989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.260098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.260124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.260237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.260263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 
00:34:37.411 [2024-07-15 03:37:43.260424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.260450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.260558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.260585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.260742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.260768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.260911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.260937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.261080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.261105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.411 [2024-07-15 03:37:43.261214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.411 [2024-07-15 03:37:43.261239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.411 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.261376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.261402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.261513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.261538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.261698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.261723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.261831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.261857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 
00:34:37.412 [2024-07-15 03:37:43.262003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.262029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.262141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.262167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.262296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.262321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.262488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.262513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.262644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.262669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.262790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.262816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.262958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.262988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.263103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.263129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.263292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.263318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.263484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.263509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 
00:34:37.412 [2024-07-15 03:37:43.263623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.263650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.263791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.263816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.263922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.263948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.264107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.264135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.264247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.264272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.264437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.264462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.264568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.264593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.264735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.264760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.264862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.264893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.265035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.265060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 
00:34:37.412 [2024-07-15 03:37:43.265230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.265255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.265398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.265424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.265534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.265559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.265658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.265683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.265826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.265851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.266001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.266026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.266139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.266165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.266311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.266337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.266479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.266505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.266643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.266668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 
00:34:37.412 [2024-07-15 03:37:43.266898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.266925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.267061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.267086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.267203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.412 [2024-07-15 03:37:43.267228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-15 03:37:43.267341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.267366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.267506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.267532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.267694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.267720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.267827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.267852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.267970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.267995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.268104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.268130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.268260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.268285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 
00:34:37.413 [2024-07-15 03:37:43.268424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.268449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.268589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.268614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.268729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.268754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.268918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.268944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.269081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.269106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.269225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.269250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.269382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.269407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.269625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.269651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.269786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.269812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.269958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.269984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 
00:34:37.413 [2024-07-15 03:37:43.270121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.270146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.270261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.270287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.270424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.270449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.270555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.270580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.270722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.270757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.270898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.270926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.271058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.271084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.271196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.271222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.271389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.271416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.271536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.271565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 
00:34:37.413 [2024-07-15 03:37:43.271718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.271744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.271890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.271917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.272149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.272176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.272286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.272311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-15 03:37:43.272448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.413 [2024-07-15 03:37:43.272474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.272644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.272670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.272829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.272854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.272969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.272994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.273160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.273186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.273353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.273378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 
00:34:37.414 [2024-07-15 03:37:43.273519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.273544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.273762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.273787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.273895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.273921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.274055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.274081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.274295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.274325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.274490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.274515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.274655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.274680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.274846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.274872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.275033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.275059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.275198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.275224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 
00:34:37.414 [2024-07-15 03:37:43.275361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.275386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.275522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.275547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.275676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.275702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.275845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.275870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.276015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.276041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.276162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.276189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.276318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.276344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.276487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.276513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.276623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.276648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.276790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.276816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 
00:34:37.414 [2024-07-15 03:37:43.276958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.276984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.277120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.277145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.277252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.277278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.277442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.277468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.277578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.277604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.277763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.277789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.277931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.277957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.278073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.278098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.278206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.278232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.278394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.278419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 
00:34:37.414 [2024-07-15 03:37:43.278554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.278579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-15 03:37:43.278684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.414 [2024-07-15 03:37:43.278714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.278872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.278910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.279056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.279081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.279242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.279267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.279402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.279427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.279593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.279618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.279729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.279753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.279889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.279915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.280057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.280084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 
00:34:37.415 [2024-07-15 03:37:43.280249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.280274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.280412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.280437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.280569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.280595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.280703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.280729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.280838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.280863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.281017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.281043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.281156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.281181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.281293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.281319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.281486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.281511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.281651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.281676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 
00:34:37.415 [2024-07-15 03:37:43.281839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.281864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.281985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.282010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.282149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.282174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.282282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.282307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.282467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.282492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.282626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.282652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.282788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.282813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.282976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.283002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.283135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.283164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.283275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.283300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 
00:34:37.415 [2024-07-15 03:37:43.283442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.283467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.283600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.283625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.283768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.283793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.283934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.283960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.284101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.284127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.284342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.284367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.284512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.284537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.284679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.284704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.415 qpair failed and we were unable to recover it. 00:34:37.415 [2024-07-15 03:37:43.284806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.415 [2024-07-15 03:37:43.284832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.284978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.285003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 
00:34:37.416 [2024-07-15 03:37:43.285143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.285168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.285282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.285308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.285474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.285500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.285618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.285643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.285759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.285786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.286004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.286030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.286196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.286221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.286363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.286388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.286497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.286522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.286674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.286700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 
00:34:37.416 [2024-07-15 03:37:43.286839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.286864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.287039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.287064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.287178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.287205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.287375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.287400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.287566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.287592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.287706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.287731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.287882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.287908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.288027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.288052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.288160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.288185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.288296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.288321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 
00:34:37.416 [2024-07-15 03:37:43.288428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.288453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.288620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.288645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.288747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.288772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.288937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.288963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.289115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.289141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.289276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.289301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.289436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.289461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.289595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.289620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.289761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.289788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.289961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.289987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 
00:34:37.416 [2024-07-15 03:37:43.290099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.290124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.290260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.290286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.290395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.290420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.290561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.290586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.290749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.290775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.416 [2024-07-15 03:37:43.290904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.416 [2024-07-15 03:37:43.290930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.416 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.291072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.291097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.291204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.291230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.291382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.291407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.291542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.291567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 
00:34:37.417 [2024-07-15 03:37:43.291704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.291729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.291872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.291903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.292043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.292068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.292179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.292204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.292348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.292373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.292504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.292529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.292679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.292704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.292841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.292866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.293015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.293041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.293182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.293207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 
00:34:37.417 [2024-07-15 03:37:43.293370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.293396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.293558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.293583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.293723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.293749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.293890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.293917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.294081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.294106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.294280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.294305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.294419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.294448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.294568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.294593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.294704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.294729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.294868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.294905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 
00:34:37.417 [2024-07-15 03:37:43.295024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.295050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.295198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.295223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.295393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.295419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.295554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.295578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.295710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.295735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.295887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.295913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.296056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.296081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.296194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.296220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.296365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.296391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.296609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.296634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 
00:34:37.417 [2024-07-15 03:37:43.296769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.296794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.296910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.296936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.297080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.417 [2024-07-15 03:37:43.297106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.417 qpair failed and we were unable to recover it. 00:34:37.417 [2024-07-15 03:37:43.297249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.297275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.297417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.297443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.297546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.297571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.297711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.297736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.297847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.297872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.298015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.298040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.298204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.298229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 
00:34:37.418 [2024-07-15 03:37:43.298344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.298370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.298533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.298559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.298718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.298743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.298853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.298895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.299037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.299062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.299173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.299198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.299315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.299340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.299474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.299499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.299638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.299663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 00:34:37.418 [2024-07-15 03:37:43.299831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.418 [2024-07-15 03:37:43.299857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.418 qpair failed and we were unable to recover it. 
00:34:37.418 [2024-07-15 03:37:43.299995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.418 [2024-07-15 03:37:43.300021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.418 qpair failed and we were unable to recover it.
00:34:37.421 [... the preceding three messages repeat, only the timestamps changing, for tqpair=0x2300f20 through 2024-07-15 03:37:43.318851 ...]
00:34:37.421 [2024-07-15 03:37:43.319047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.421 [2024-07-15 03:37:43.319086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.421 qpair failed and we were unable to recover it.
00:34:37.423 [... the same three messages repeat, only the timestamps changing, for tqpair=0x7fcbe8000b90 through 2024-07-15 03:37:43.327833 ...]
00:34:37.423 [2024-07-15 03:37:43.328003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.423 [2024-07-15 03:37:43.328034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.423 qpair failed and we were unable to recover it.
00:34:37.424 [... the same three messages repeat, only the timestamps changing, for tqpair=0x2300f20 through 2024-07-15 03:37:43.335453 ...]
00:34:37.424 [2024-07-15 03:37:43.335573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.335601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.335744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.335771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.335920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.335947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.336086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.336111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.336299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.336328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.336453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.336495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.336670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.336698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.336873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.336908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.337042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.337068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.337204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.337228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 
00:34:37.424 [2024-07-15 03:37:43.337392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.337418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.337579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.337607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.337752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.337780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.337941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.337971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.338108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.338133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.338258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.338287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.338433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.338461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.338616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.338641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.338824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.338852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 00:34:37.424 [2024-07-15 03:37:43.339055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.424 [2024-07-15 03:37:43.339081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.424 qpair failed and we were unable to recover it. 
00:34:37.424 [2024-07-15 03:37:43.339238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.339266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.339421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.339446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.339587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.339630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.339791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.339816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.339980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.340007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.340118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.340144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.340308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.340354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.340516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.340544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.340658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.340686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.340842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.340867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 
00:34:37.425 [2024-07-15 03:37:43.341010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.341052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.341204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.341232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.341382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.341410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.341551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.341577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.341689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.341713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.341884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.341913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.342037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.342065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.342251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.342277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.342439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.342467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.342622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.342650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 
00:34:37.425 [2024-07-15 03:37:43.342800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.342832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.342982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.343008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.343140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.343182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.343339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.343367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.343520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.343550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.343679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.343704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.343845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.343870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.344027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.344053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.344162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.344189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.344325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.344350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 
00:34:37.425 [2024-07-15 03:37:43.344534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.344562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.344719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.344747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.344890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.344919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.345078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.345103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.345286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.345315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.345471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.425 [2024-07-15 03:37:43.345499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.425 qpair failed and we were unable to recover it. 00:34:37.425 [2024-07-15 03:37:43.345643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.345671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.345819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.345844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.345990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.346033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.346191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.346219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 
00:34:37.426 [2024-07-15 03:37:43.346364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.346392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.346581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.346606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.346722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.346765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.346942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.346971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.347123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.347152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.347287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.347312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.347420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.347447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.347599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.347632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.347785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.347826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.347967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.347994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 
00:34:37.426 [2024-07-15 03:37:43.348130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.348155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.348315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.348342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.348501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.348525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.348659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.348684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.348825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.348850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.349038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.349064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.349220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.349248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.349402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.349428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.349562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.349603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.349757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.349784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 
00:34:37.426 [2024-07-15 03:37:43.349915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.349943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.350101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.350126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.350267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.350309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.350463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.350490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.350644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.350672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.350856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.350892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.351064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.351089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.351221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.351248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.351432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.351460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.351619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.351643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 
00:34:37.426 [2024-07-15 03:37:43.351822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.351850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.352051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.426 [2024-07-15 03:37:43.352076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.426 qpair failed and we were unable to recover it. 00:34:37.426 [2024-07-15 03:37:43.352211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.352240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.352403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.352428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.352609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.352636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.352798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.352826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.352973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.352998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.353138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.353162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.353319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.353348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.353529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.353556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 
00:34:37.427 [2024-07-15 03:37:43.353697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.353725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.353886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.353912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.354029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.354055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.354227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.354255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.354429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.354457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.354612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.354637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.354803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.354831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.354977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.355005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.355187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.355215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.355350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.355375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 
00:34:37.427 [2024-07-15 03:37:43.355516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.355540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.355717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.355745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.355883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.355913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.356105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.356130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.356294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.356319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.356458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.356483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.356646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.356675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.356855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.356889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.357072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.357096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.357204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.357247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 
00:34:37.427 [2024-07-15 03:37:43.357368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.357395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.357557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.357582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.357773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.357801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.427 qpair failed and we were unable to recover it. 00:34:37.427 [2024-07-15 03:37:43.357958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.427 [2024-07-15 03:37:43.357988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.428 qpair failed and we were unable to recover it. 00:34:37.428 [2024-07-15 03:37:43.358139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.428 [2024-07-15 03:37:43.358167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.428 qpair failed and we were unable to recover it. 00:34:37.428 [2024-07-15 03:37:43.358347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.428 [2024-07-15 03:37:43.358371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.428 qpair failed and we were unable to recover it. 00:34:37.428 [2024-07-15 03:37:43.358484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.428 [2024-07-15 03:37:43.358525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.428 qpair failed and we were unable to recover it. 00:34:37.428 [2024-07-15 03:37:43.358678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.428 [2024-07-15 03:37:43.358706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.428 qpair failed and we were unable to recover it. 00:34:37.428 [2024-07-15 03:37:43.358888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.428 [2024-07-15 03:37:43.358916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.428 qpair failed and we were unable to recover it. 00:34:37.428 [2024-07-15 03:37:43.359074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.428 [2024-07-15 03:37:43.359099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.428 qpair failed and we were unable to recover it. 
00:34:37.428 [2024-07-15 03:37:43.359284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.428 [2024-07-15 03:37:43.359312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.428 qpair failed and we were unable to recover it.
[... the same three-line triplet -- "connect() failed, errno = 111" / "sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." -- repeats ~210 times with advancing timestamps (2024-07-15 03:37:43.359284 through 03:37:43.397076; wall clock 00:34:37.428-00:34:37.434); consecutive duplicates collapsed here ...]
00:34:37.434 [2024-07-15 03:37:43.397234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.397261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.397399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.397440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.397617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.397645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.397790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.397817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.397980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.398005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.398146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.398171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.398357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.398385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.398537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.398565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.398690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.398718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.398848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.398873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 
00:34:37.434 [2024-07-15 03:37:43.398997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.399021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.399169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.399198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.399375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.399407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.399573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.399597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.399740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.399765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.399902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.399931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.400080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.400107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.400264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.400289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.400424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.400466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.400602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.400629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 
00:34:37.434 [2024-07-15 03:37:43.400778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.400806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.400941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.400966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.401129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.401175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.401357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.401384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.401502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.401530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.401663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.401687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.401802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.401827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.401971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.402000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.402121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.402149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.402315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.402340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 
00:34:37.434 [2024-07-15 03:37:43.402502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.402544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.434 [2024-07-15 03:37:43.402670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.434 [2024-07-15 03:37:43.402697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.434 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.402826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.402853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.403061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.403086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.403242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.403271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.403399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.403427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.403548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.403575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.403740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.403765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.403869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.403901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.404039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.404071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 
00:34:37.435 [2024-07-15 03:37:43.404226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.404254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.404399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.404424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.404541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.404565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.404716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.404745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.404862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.404895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.405035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.405061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.405173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.405198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.405388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.405416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.405548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.405576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.405724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.405751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 
00:34:37.435 [2024-07-15 03:37:43.405894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.405935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.406053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.406078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.406203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.406228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.406368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.406393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.406548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.406576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.406755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.406782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.406907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.406935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.407118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.407143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.407319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.407346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.407474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.407501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 
00:34:37.435 [2024-07-15 03:37:43.407656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.407684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.407817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.407843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.407990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.408015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.408188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.408213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.408375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.408403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.408558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.408583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.408696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.408720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.408918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.408947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.409101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.409129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.435 [2024-07-15 03:37:43.409258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.409282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 
00:34:37.435 [2024-07-15 03:37:43.409401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.435 [2024-07-15 03:37:43.409426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.435 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.409593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.409618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.409745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.409772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.409907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.409933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.410095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.410120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.410264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.410291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.410442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.410470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.410634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.410658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.410792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.410816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.410994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.411023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 
00:34:37.436 [2024-07-15 03:37:43.411178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.411206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.411387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.411411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.411563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.411591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.411741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.411769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.411916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.411944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.412079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.412105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.412218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.412243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.412429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.412456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.412587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.412613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.412775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.412799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 
00:34:37.436 [2024-07-15 03:37:43.412984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.413012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.413143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.413170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.413290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.413317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.413497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.413522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.413680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.413708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.413859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.413894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.414044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.414071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.414230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.414255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.414363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.414388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.414490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.414515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 
00:34:37.436 [2024-07-15 03:37:43.414641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.414671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.414889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.414932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.415098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.415124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.415297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.415324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.415460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.415488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.415670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.415695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.415799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.415838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.415993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.416026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.416204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.416232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 00:34:37.436 [2024-07-15 03:37:43.416362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.436 [2024-07-15 03:37:43.416387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.436 qpair failed and we were unable to recover it. 
00:34:37.437 [2024-07-15 03:37:43.416532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.416557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.416699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.416726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.416874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.416910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.417038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.417063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.417184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.417208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.417346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.417371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.417557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.417584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.417717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.417741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.417847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.417872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.418001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.418027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 
00:34:37.437 [2024-07-15 03:37:43.418139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.418181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.418345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.418370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.418482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.418507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.418671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.418699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.418816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.418843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.419039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.419065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.419220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.419249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.419398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.419426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.419553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.419580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 00:34:37.437 [2024-07-15 03:37:43.419737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.437 [2024-07-15 03:37:43.419763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.437 qpair failed and we were unable to recover it. 
00:34:37.437 [2024-07-15 03:37:43.419886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.437 [2024-07-15 03:37:43.419913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.437 qpair failed and we were unable to recover it.
00:34:37.438 [2024-07-15 03:37:43.428038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.438 [2024-07-15 03:37:43.428076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.438 qpair failed and we were unable to recover it.
00:34:37.441 [2024-07-15 03:37:43.444986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.441 [2024-07-15 03:37:43.445025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.441 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 03:37:43.419 through 03:37:43.456, cycling among tqpair=0x2300f20, 0x7fcbf0000b90, and 0x7fcbe8000b90; duplicate entries elided ...]
00:34:37.442 [2024-07-15 03:37:43.456859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.442 [2024-07-15 03:37:43.456901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.442 qpair failed and we were unable to recover it. 00:34:37.442 [2024-07-15 03:37:43.457102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.442 [2024-07-15 03:37:43.457131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.442 qpair failed and we were unable to recover it. 00:34:37.442 [2024-07-15 03:37:43.457332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.442 [2024-07-15 03:37:43.457376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.442 qpair failed and we were unable to recover it. 00:34:37.442 [2024-07-15 03:37:43.457505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.442 [2024-07-15 03:37:43.457548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.442 qpair failed and we were unable to recover it. 00:34:37.442 [2024-07-15 03:37:43.457719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.442 [2024-07-15 03:37:43.457745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.442 qpair failed and we were unable to recover it. 00:34:37.442 [2024-07-15 03:37:43.457912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.442 [2024-07-15 03:37:43.457939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.442 qpair failed and we were unable to recover it. 00:34:37.442 [2024-07-15 03:37:43.458096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.442 [2024-07-15 03:37:43.458139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.442 qpair failed and we were unable to recover it. 00:34:37.442 [2024-07-15 03:37:43.458323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.442 [2024-07-15 03:37:43.458366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.442 qpair failed and we were unable to recover it. 00:34:37.442 [2024-07-15 03:37:43.458484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.442 [2024-07-15 03:37:43.458511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.442 qpair failed and we were unable to recover it. 00:34:37.442 [2024-07-15 03:37:43.458629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.458654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 
00:34:37.443 [2024-07-15 03:37:43.458815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.458841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.459007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.459051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.459187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.459232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.459396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.459425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.459578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.459603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.459748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.459773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.459890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.459917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.460058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.460084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.460197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.460222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.460383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.460409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 
00:34:37.443 [2024-07-15 03:37:43.460570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.460595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.460737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.460762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.460887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.460918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.461034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.461060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.461205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.461231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.461360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.461389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.461566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.461592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.461737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.461763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.461903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.461930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.462095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.462124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 
00:34:37.443 [2024-07-15 03:37:43.462275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.462304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.462445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.462488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.462650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.462676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.462814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.462840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.462986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.463013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.463159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.463185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.463356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.463382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.463585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.463611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.463780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.463806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.463968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.464012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 
00:34:37.443 [2024-07-15 03:37:43.464171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.464214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.464370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.464416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.464550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.464576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.464707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.464733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.464895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.464940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.465096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.465139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.465324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.465372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.465510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.465536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.465698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.465724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 00:34:37.443 [2024-07-15 03:37:43.465937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.443 [2024-07-15 03:37:43.465964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.443 qpair failed and we were unable to recover it. 
00:34:37.444 [2024-07-15 03:37:43.467125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230ef20 (9): Bad file descriptor 00:34:37.444 [2024-07-15 03:37:43.467356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.444 [2024-07-15 03:37:43.467391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.444 qpair failed and we were unable to recover it.
00:34:37.445 [2024-07-15 03:37:43.479207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.445 [2024-07-15 03:37:43.479250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.445 qpair failed and we were unable to recover it.
00:34:37.446 [2024-07-15 03:37:43.488995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.446 [2024-07-15 03:37:43.489020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.446 qpair failed and we were unable to recover it. 00:34:37.446 [2024-07-15 03:37:43.489150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.446 [2024-07-15 03:37:43.489191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.446 qpair failed and we were unable to recover it. 00:34:37.446 [2024-07-15 03:37:43.489340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.446 [2024-07-15 03:37:43.489369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.446 qpair failed and we were unable to recover it. 00:34:37.446 [2024-07-15 03:37:43.489517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.446 [2024-07-15 03:37:43.489545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.446 qpair failed and we were unable to recover it. 00:34:37.446 [2024-07-15 03:37:43.489664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.446 [2024-07-15 03:37:43.489692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.446 qpair failed and we were unable to recover it. 00:34:37.446 [2024-07-15 03:37:43.489869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.446 [2024-07-15 03:37:43.489903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.446 qpair failed and we were unable to recover it. 00:34:37.446 [2024-07-15 03:37:43.490066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.446 [2024-07-15 03:37:43.490091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.446 qpair failed and we were unable to recover it. 00:34:37.446 [2024-07-15 03:37:43.490254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.446 [2024-07-15 03:37:43.490283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.446 qpair failed and we were unable to recover it. 00:34:37.446 [2024-07-15 03:37:43.490408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.446 [2024-07-15 03:37:43.490436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.446 qpair failed and we were unable to recover it. 00:34:37.446 [2024-07-15 03:37:43.490649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.490678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 
00:34:37.447 [2024-07-15 03:37:43.490795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.490823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.490984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.491010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.491151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.491178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.491337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.491369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.491522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.491549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.491728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.491756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.491912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.491953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.492085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.492110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.492250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.492275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.492415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.492455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 
00:34:37.447 [2024-07-15 03:37:43.492609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.492636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.492794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.492821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.492997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.493023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.493142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.493184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.493354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.493379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.493534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.493562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.493712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.493740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.493907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.493932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.494069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.494093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.494308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.494333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 
00:34:37.447 [2024-07-15 03:37:43.494467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.494494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.494625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.494655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.494808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.494836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.495020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.495046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.495275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.495332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.495480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.495508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.495644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.495688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.495833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.495860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.496030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.496055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.496159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.496184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 
00:34:37.447 [2024-07-15 03:37:43.496322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.496362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.496492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.496520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.496671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.496698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.496849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.496881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.497030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.497056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.497169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.497193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.497329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.497369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.497547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.497576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.497754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.497782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.497982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.498007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 
00:34:37.447 [2024-07-15 03:37:43.498124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.498149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.498347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.498373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.498531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.498558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.498712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.498739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.498904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.498929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.447 qpair failed and we were unable to recover it. 00:34:37.447 [2024-07-15 03:37:43.499074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.447 [2024-07-15 03:37:43.499100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.499263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.499290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.499417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.499460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.499619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.499647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.499762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.499789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 
00:34:37.448 [2024-07-15 03:37:43.499950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.499976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.500116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.500140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.500280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.500306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.500464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.500492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.500646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.500675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.500791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.500818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.500981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.501007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.501118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.501143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.501314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.501342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.501465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.501507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 
00:34:37.448 [2024-07-15 03:37:43.501664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.501692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.501889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.501934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.502096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.502121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.502245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.502273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.502400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.502427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.502605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.502632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.502759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.502789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.502983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.503009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.503153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.503177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.503364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.503393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 
00:34:37.448 [2024-07-15 03:37:43.503545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.503573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.503695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.503743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.503902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.503944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.504113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.504139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.504317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.504342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.504527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.504555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.504735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.504762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.504897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.504923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.505070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.505112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.505282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.505311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 
00:34:37.448 [2024-07-15 03:37:43.505471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.505495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.505681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.505709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.505862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.505897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.506055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.506080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.506197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.506222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.506341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.506367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.506531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.506555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.506697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.506725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.506895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.506924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.507058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.507083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 
00:34:37.448 [2024-07-15 03:37:43.507216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.507241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.507409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.448 [2024-07-15 03:37:43.507437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.448 qpair failed and we were unable to recover it. 00:34:37.448 [2024-07-15 03:37:43.507625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.507650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.507809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.507836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.507991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.508019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.508154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.508180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.508289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.508314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.508469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.508497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.508690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.508719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.508889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.508917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 
00:34:37.449 [2024-07-15 03:37:43.509081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.509106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.509249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.509274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.509429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.509457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.509582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.509609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.509778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.509803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.509953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.509979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.510110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.510137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.510304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.510329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.510444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.510484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.510600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.510627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 
00:34:37.449 [2024-07-15 03:37:43.510790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.510816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.510969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.510996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.511148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.511176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.511301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.511326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.511488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.511528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.511678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.511705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.511845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.511870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.512022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.512047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.512232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.512259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 00:34:37.449 [2024-07-15 03:37:43.512419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.449 [2024-07-15 03:37:43.512444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.449 qpair failed and we were unable to recover it. 
00:34:37.449 [2024-07-15 03:37:43.512602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.449 [2024-07-15 03:37:43.512630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.449 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every connection attempt between 03:37:43.512602 and 03:37:43.548936 (wall clock 00:34:37.449 through 00:34:37.737), alternating between tqpair=0x2300f20 and tqpair=0x7fcbf0000b90; each connect() to 10.0.0.2 port 4420 fails with errno = 111 and no qpair recovers ...]
00:34:37.737 [2024-07-15 03:37:43.548909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.737 [2024-07-15 03:37:43.548936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.737 qpair failed and we were unable to recover it.
00:34:37.737 [2024-07-15 03:37:43.549081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.549106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.549210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.549235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.549375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.549402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.549505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.549530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.549645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.549670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.549816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.549843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.550007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.550034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.550150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.550176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.550282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.550307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.550448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.550473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 
00:34:37.737 [2024-07-15 03:37:43.550602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.550627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.550771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.550801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.550980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.551019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.551165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.551192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.551334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.551359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.551474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.551500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.551637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.551679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.551809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.551833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.551977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.552003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.552117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.552143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 
00:34:37.737 [2024-07-15 03:37:43.552308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.552333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.552494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.552519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.552630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.552656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.737 [2024-07-15 03:37:43.552799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.737 [2024-07-15 03:37:43.552823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.737 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.552926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.552952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.553120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.553146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.553261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.553286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.553457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.553483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.553622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.553647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.553811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.553838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 
00:34:37.738 [2024-07-15 03:37:43.553989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.554029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.554140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.554167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.554306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.554332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.554442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.554468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.554627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.554652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.554794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.554819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.554988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.555014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.555134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.555161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.555302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.555331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.555494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.555519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 
00:34:37.738 [2024-07-15 03:37:43.555669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.555694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.555829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.555854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.555992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.556031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.556143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.556170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.556287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.556312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.556427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.556454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.556622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.556648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.556807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.556838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.557003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.557029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.557172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.557198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 
00:34:37.738 [2024-07-15 03:37:43.557342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.557369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.557513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.557540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.557656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.557681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.738 [2024-07-15 03:37:43.557839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.738 [2024-07-15 03:37:43.557866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.738 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.558027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.558053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.558188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.558213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.558353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.558377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.558519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.558545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.558680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.558704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.558869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.558905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 
00:34:37.739 [2024-07-15 03:37:43.559021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.559047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.559185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.559210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.559378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.559403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.559564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.559589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.559695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.559720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.559829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.559857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.559977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.560002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.560164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.560189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.560304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.560329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.560492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.560516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 
00:34:37.739 [2024-07-15 03:37:43.560656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.560682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.560807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.560836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.560998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.561037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.561188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.561216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.561359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.561385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.561524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.561550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.561704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.561732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.561979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.562006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.562118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.562144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.562292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.562318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 
00:34:37.739 [2024-07-15 03:37:43.562459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.562485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.562626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.562653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.562768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.562792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.562899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.562924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.563066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.563092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.563226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.563250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.563386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.563411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.563578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.563604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.563763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.563792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 00:34:37.739 [2024-07-15 03:37:43.563955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.563981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.739 qpair failed and we were unable to recover it. 
00:34:37.739 [2024-07-15 03:37:43.564120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.739 [2024-07-15 03:37:43.564146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.564312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.564338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.564474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.564504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.564667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.564692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.564798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.564823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.564965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.564993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.565153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.565179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.565290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.565316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.565469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.565494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.565631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.565657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 
00:34:37.740 [2024-07-15 03:37:43.565812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.565840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.566009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.566035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.566198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.566224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.566339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.566366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.566509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.566535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.566637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.566662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.566817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.566842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.566984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.567010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.567122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.567149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.567283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.567308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 
00:34:37.740 [2024-07-15 03:37:43.567469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.567495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.567655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.567681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.567832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.567861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.568016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.568041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.568153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.568179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.568344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.568368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.568510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.568536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.568649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.568674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.568812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.568837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.568959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.568986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 
00:34:37.740 [2024-07-15 03:37:43.569094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.569119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.569263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.569289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.569462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.569487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.569595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.569620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.569763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.569788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.569929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.569956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.570123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.740 [2024-07-15 03:37:43.570148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.740 qpair failed and we were unable to recover it. 00:34:37.740 [2024-07-15 03:37:43.570262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-15 03:37:43.570288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-15 03:37:43.570431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-15 03:37:43.570456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 00:34:37.741 [2024-07-15 03:37:43.570595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.741 [2024-07-15 03:37:43.570621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.741 qpair failed and we were unable to recover it. 
00:34:37.741 [2024-07-15 03:37:43.570737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.570763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.570887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.570913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.571078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.571108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.571215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.571241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.571376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.571401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.571543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.571569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.571726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.571753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.571909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.571956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.572095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.572120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.572228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.572253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.572397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.572422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.572521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.572547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.572707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.572732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.572870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.572901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.573036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.573062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.573197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.573222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.573365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.573390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.573527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.573552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.573706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.573734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.573888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.573932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.574098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.574123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.574255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.574280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.574454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.574479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.574587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.574612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.574741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.574766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.574942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.574968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.575109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.575134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.575274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.575300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.575464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.575490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.575608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.575633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.575806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.575834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.575973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.575999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.576159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.576184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.741 [2024-07-15 03:37:43.576323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.741 [2024-07-15 03:37:43.576349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.741 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.576516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.576541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.576676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.576702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.576889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.576936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.577054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.577081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.577245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.577271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.577373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.577398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.577541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.577566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.577704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.577729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.577841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.577870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.577983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.578008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.578142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.578168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.578301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.578326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.578470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.578496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.578626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.578651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.578808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.578836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.579002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.579028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.579167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.579193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.579332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.579358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.579522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.579548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.579680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.579705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.579901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.579945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.580088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.580113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.580250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.580275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.580385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.580410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.580552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.580579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.580714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.580740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.580900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.580927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.581068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.581094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.581229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.581255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.581395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.581421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.581585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.581610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.581723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.581751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.581938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.581964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.582097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.582122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.582257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.582282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.742 [2024-07-15 03:37:43.582456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.742 [2024-07-15 03:37:43.582482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.742 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.582597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.582622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.582726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.582752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.582862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.582894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.583032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.583058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.583193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.583218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.583321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.583347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.583481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.583506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.583641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.583666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.583841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.583869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.584030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.584056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.584190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.584216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.584358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.584383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.584525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.584554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.584716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.584746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.584925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.584951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.585085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.585110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.585249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.585275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.585388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.585413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.585559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.585584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.585721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.585746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.585886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.585912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.586078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.586103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.586239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.586265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.586411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.586436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.586602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.586627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.586780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.586808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.586966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.586992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.587159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.587185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.587302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.587328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.587493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.587519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.587652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.743 [2024-07-15 03:37:43.587678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.743 qpair failed and we were unable to recover it.
00:34:37.743 [2024-07-15 03:37:43.587840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.587865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.588010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.588035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.588166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.588192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.588330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.588357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.588492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.588517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.588650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.588675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.588832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.588860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.588998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.589024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.589177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.589216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.589331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.589361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.589476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.589504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.589673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.589700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.589807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.589835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.590011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.590038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.590206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.590232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.590396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.590440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.590625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.590669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.590840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.590866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.591018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.591044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.591233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.591275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.591410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.591460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.591644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.591693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.591834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.591860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.592008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.592034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.592192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.592236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.592397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.592443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.592607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.592652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.592804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.592830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.592988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.593033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.593163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.593207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.593390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.593438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.593590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.593635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.593755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.593780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.593931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.593959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.594127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.744 [2024-07-15 03:37:43.594154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.744 qpair failed and we were unable to recover it.
00:34:37.744 [2024-07-15 03:37:43.594302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.594328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.594460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.594486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.594608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.594634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.594777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.594803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.594938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.594965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.595118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.595144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.595283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.595309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.595433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.595476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.595604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.595630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.595796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.595822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.595987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.596031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.596193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.596240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.596353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.596380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.596541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.596579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.596730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.596757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.596897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.596943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.597075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.597103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.597233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.597261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.597419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.597446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.597634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.597681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.597852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.597885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.598033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.598061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.598246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.598288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.598422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.598465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.598624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.598667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.598842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.598868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.598988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.599014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.599207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.599251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.599413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.599457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.599640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.599684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.599850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.599883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.600023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.600050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.600179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.600222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.600410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.600454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.600636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.600680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.600791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.600817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.745 qpair failed and we were unable to recover it.
00:34:37.745 [2024-07-15 03:37:43.600998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.745 [2024-07-15 03:37:43.601042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.601194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.601238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.601392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.601436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.601587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.601630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.601810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.601838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.601985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.602011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.602172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.602200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.602447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.602496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.602621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.602648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.602785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.602813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.603010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.603036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.603168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.603195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.603349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.603377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.603531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.603559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.603696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.603721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.603866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.603903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.604039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.604065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.604227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.604259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.604413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.604441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.604617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.604645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.604797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.604825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.605016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.605041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.605154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.605178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.605320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.605363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.605516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.605543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.605715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.605742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.605899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.605943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.606109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.606134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.606281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.606305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.606465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.606492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.606645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.606673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.606847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.606893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.607042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.607069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.607248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.607275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.607441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.607483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.607643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.607686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.746 [2024-07-15 03:37:43.607801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.746 [2024-07-15 03:37:43.607827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:37.746 qpair failed and we were unable to recover it.
00:34:37.747 [2024-07-15 03:37:43.607941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.747 [2024-07-15 03:37:43.607969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.747 qpair failed and we were unable to recover it.
00:34:37.747 [2024-07-15 03:37:43.608108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.608132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.608299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.608326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.608474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.608502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.608679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.608707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.608866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.608922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.609034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.609075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.609225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.609253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.609390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.609418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.609594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.609621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.609757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.609782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 
00:34:37.747 [2024-07-15 03:37:43.609922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.609947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.610087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.610112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.610267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.610295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.610451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.610480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.610696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.610723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.610902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.610944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.611087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.611112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.611281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.611306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.611469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.611496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.611651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.611678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 
00:34:37.747 [2024-07-15 03:37:43.611846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.611872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.612018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.612044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.612231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.612258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.612481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.612510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.612651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.612679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.612810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.612838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.613003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.613028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.613183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.613210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.613383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.613411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.613615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.613643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 
00:34:37.747 [2024-07-15 03:37:43.613763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.613790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.613954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.613980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.614111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.614137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.614246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.614288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.614449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.747 [2024-07-15 03:37:43.614478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.747 qpair failed and we were unable to recover it. 00:34:37.747 [2024-07-15 03:37:43.614648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.614675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.614811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.614835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.615010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.615036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.615175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.615200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.615310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.615353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 
00:34:37.748 [2024-07-15 03:37:43.615477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.615505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.615634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.615676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.615801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.615830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.616027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.616053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.616186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.616211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.616322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.616363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.616520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.616548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.616716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.616748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.616899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.616926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.617071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.617097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 
00:34:37.748 [2024-07-15 03:37:43.617260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.617285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.617414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.617442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.617603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.617631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.617809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.617838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.618010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.618036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.618209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.618234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.618365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.618390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.618506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.618547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.618702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.618730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.618893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.618919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 
00:34:37.748 [2024-07-15 03:37:43.619058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.619083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.619258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.619286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.619412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.619440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.619559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.619587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.619763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.619792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.619949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.619975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.748 [2024-07-15 03:37:43.620120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.748 [2024-07-15 03:37:43.620147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.748 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.620314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.620340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.620526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.620553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.620684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.620727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 
00:34:37.749 [2024-07-15 03:37:43.620888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.620932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.621073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.621098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.621225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.621253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.621385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.621415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.621588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.621621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.621772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.621800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.621928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.621975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.622138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.622163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.622285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.622312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.622443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.622471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 
00:34:37.749 [2024-07-15 03:37:43.622624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.622653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.622798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.622826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.622977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.623004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.623165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.623191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.623296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.623339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.623496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.623523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.623699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.623726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.623874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.623911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.624038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.624064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.624218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.624244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 
00:34:37.749 [2024-07-15 03:37:43.624350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.624375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.624488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.624513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.624651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.624693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.624885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.624914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.625068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.625093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.625254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.625283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.625471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.625499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.625652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.625677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.625817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.625860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.626001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.626026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 
00:34:37.749 [2024-07-15 03:37:43.626133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.626158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.626289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.626318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.626491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.749 [2024-07-15 03:37:43.626519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.749 qpair failed and we were unable to recover it. 00:34:37.749 [2024-07-15 03:37:43.626658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.626683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.626823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.626848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.626961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.626986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.627124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.627150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.627277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.627302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.627443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.627471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.627646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.627671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 
00:34:37.750 [2024-07-15 03:37:43.627779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.627804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.627988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.628014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.628150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.628175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.628285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.628312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.628487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.628515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.628708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.628734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.628921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.628965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.629104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.629130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.629264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.629289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.629403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.629428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 
00:34:37.750 [2024-07-15 03:37:43.629592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.629621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.629819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.629847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.629994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.630020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.630166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.630191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.630322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.630347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.630512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.630540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.630739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.630764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.630869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.630914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.631058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.631084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 00:34:37.750 [2024-07-15 03:37:43.631250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.750 [2024-07-15 03:37:43.631278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.750 qpair failed and we were unable to recover it. 
00:34:37.750 [2024-07-15 03:37:43.631442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.750 [2024-07-15 03:37:43.631467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.750 qpair failed and we were unable to recover it.
00:34:37.750 [... the same three-line connect()/qpair failure repeats, with only the timestamps changing, for every retry between 03:37:43.631 and 03:37:43.667; from 03:37:43.650 onward the attempts alternate between tqpair=0x2300f20 and tqpair=0x7fcbf0000b90, all against addr=10.0.0.2, port=4420 ...]
00:34:37.756 [2024-07-15 03:37:43.667066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.756 [2024-07-15 03:37:43.667091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:37.756 qpair failed and we were unable to recover it.
00:34:37.756 [2024-07-15 03:37:43.667232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.667257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-15 03:37:43.667393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.667417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-15 03:37:43.667564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.667591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-15 03:37:43.667747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.667776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-15 03:37:43.667956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.667981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-15 03:37:43.668151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.668176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-15 03:37:43.668315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.668340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-15 03:37:43.668476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.668501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-15 03:37:43.668660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.668685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-15 03:37:43.668821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.668846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 
00:34:37.756 [2024-07-15 03:37:43.668970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.668996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.756 [2024-07-15 03:37:43.669131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.756 [2024-07-15 03:37:43.669157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.756 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.669295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.669321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.669461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.669488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.669625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.669650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.669790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.669815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.669960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.669986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.670129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.670154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.670300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.670325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.670463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.670489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 
00:34:37.757 [2024-07-15 03:37:43.670597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.670622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.670761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.670786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.670899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.670924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.671036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.671061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.671223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.671249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.671411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.671436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.671576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.671601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.671790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.671818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.671979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.672004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.672146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.672172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 
00:34:37.757 [2024-07-15 03:37:43.672312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.672339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.672502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.672531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.672666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.672692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.672859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.672889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.673026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.673052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.673189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.673215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.673354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.673379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.673547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.673572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.673696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.673724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.673933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.673959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 
00:34:37.757 [2024-07-15 03:37:43.674097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.674122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.674286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.674311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.674427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.757 [2024-07-15 03:37:43.674451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.757 qpair failed and we were unable to recover it. 00:34:37.757 [2024-07-15 03:37:43.674586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.674612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.674756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.674782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.674913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.674938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.675073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.675099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.675235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.675259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.675437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.675462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.675601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.675627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 
00:34:37.758 [2024-07-15 03:37:43.675772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.675796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.675992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.676017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.676117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.676142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.676278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.676303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.676472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.676497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.676631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.676656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.676806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.676845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.676980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.677019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.677154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.677194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.677345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.677374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 
00:34:37.758 [2024-07-15 03:37:43.677548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.677575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.677712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.677741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.677910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.677937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.678076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.678102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.678244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.678270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.678456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.678484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.678637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.678666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.678825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.678852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.679028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.679055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.679174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.679216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 
00:34:37.758 [2024-07-15 03:37:43.679404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.679430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.679613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.679648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.679814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.679843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.679989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.680017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.680167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.680209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.680361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.680390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.680564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.680593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.680748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.680778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.680949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.758 [2024-07-15 03:37:43.680977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.758 qpair failed and we were unable to recover it. 00:34:37.758 [2024-07-15 03:37:43.681120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.681145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 
00:34:37.759 [2024-07-15 03:37:43.681311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.681353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.681499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.681527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.681681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.681710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.681868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.681908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.682046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.682072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.682217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.682243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.682402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.682431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.682586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.682614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.682822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.682851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.683020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.683047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 
00:34:37.759 [2024-07-15 03:37:43.683200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.683229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.683389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.683416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.683571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.683601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.683801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.683830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.684001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.684028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.684216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.684245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.684424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.684452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.684602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.684630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.684821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.684853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.684994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.685021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 
00:34:37.759 [2024-07-15 03:37:43.685139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.685165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.685395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.685446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.685594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.685623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.685773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.685797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.685957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.685985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.686175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.686203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.686393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.686422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.686579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.686605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.686739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.686764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.686883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.686925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 
00:34:37.759 [2024-07-15 03:37:43.687141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.687166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.687427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.687460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.687622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.687647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.687787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.687812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.688000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.688029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.759 qpair failed and we were unable to recover it. 00:34:37.759 [2024-07-15 03:37:43.688342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.759 [2024-07-15 03:37:43.688394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.688572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.688597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.688761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.688786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.688949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.688978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.689128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.689157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 
00:34:37.760 [2024-07-15 03:37:43.689274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.689303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.689460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.689486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.689626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.689651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.689789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.689814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.689943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.689972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.690183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.690212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.690362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.690390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.690544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.690570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.690711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.690737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 00:34:37.760 [2024-07-15 03:37:43.690839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.690864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it. 
00:34:37.760 [2024-07-15 03:37:43.691024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.760 [2024-07-15 03:37:43.691052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:37.760 qpair failed and we were unable to recover it.
00:34:37.760 [... the three-line error above repeats ~210 times between 03:37:43.691 and 03:37:43.730 (elapsed 00:34:37.760 through 00:34:37.766): every connect() to 10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair handles alternating among 0x7fcbf0000b90, 0x7fcbe8000b90, and 0x7fcbe0000b90 before settling on 0x7fcbe0000b90, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:37.766 [2024-07-15 03:37:43.730413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.730441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.730602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.730628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.730770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.730797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.730991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.731021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.731169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.731195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.731335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.731361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.731559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.731588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.731736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.731764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.731961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.731987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.732133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.732159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 
00:34:37.766 [2024-07-15 03:37:43.732322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.732348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.732511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.732555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.732726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.732755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.732888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.732914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.733077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.733121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.733250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.733278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.733436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.733462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.733574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.733599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.733766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.733794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.733980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.734006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 
00:34:37.766 [2024-07-15 03:37:43.734192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.734220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.734339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.734367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.766 qpair failed and we were unable to recover it. 00:34:37.766 [2024-07-15 03:37:43.734531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.766 [2024-07-15 03:37:43.734557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.734668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.734694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.734890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.734919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.735077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.735103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.735216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.735242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.735437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.735466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.735600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.735626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.735760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.735785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 
00:34:37.767 [2024-07-15 03:37:43.735948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.735977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.736142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.736168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.736349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.736378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.736570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.736599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.736777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.736802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.736937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.736981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.737134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.737165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.737328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.737354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.737483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.737513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.737674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.737704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 
00:34:37.767 [2024-07-15 03:37:43.737933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.737959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.738059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.738085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.738216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.738244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.738400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.738425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.738564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.738605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.738762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.738791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.738951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.738977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.739078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.739104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.739241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.739269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.739402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.739429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 
00:34:37.767 [2024-07-15 03:37:43.739584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.739611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.739786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.739816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.740017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.740044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.767 qpair failed and we were unable to recover it. 00:34:37.767 [2024-07-15 03:37:43.740150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.767 [2024-07-15 03:37:43.740194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.740378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.740406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.740569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.740595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.740750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.740778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.740924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.740950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.741093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.741119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.741295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.741324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 
00:34:37.768 [2024-07-15 03:37:43.741472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.741501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.741632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.741659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.741812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.741838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.742006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.742036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.742214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.742240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.742428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.742456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.742645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.742673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.742829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.742854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.743004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.743030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.743144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.743171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 
00:34:37.768 [2024-07-15 03:37:43.743277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.743303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.743467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.743493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.743662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.743690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.743847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.743873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.744050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.744079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.744244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.744273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.744426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.744453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.744609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.744638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.744825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.744860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.745027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.745053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 
00:34:37.768 [2024-07-15 03:37:43.745213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.745242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.745398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.745428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.745607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.745632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.745768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.745813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.745966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.745996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.746148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.746174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.746301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.746343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.746522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.746550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.746709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.768 [2024-07-15 03:37:43.746735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.768 qpair failed and we were unable to recover it. 00:34:37.768 [2024-07-15 03:37:43.746874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.746924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 
00:34:37.769 [2024-07-15 03:37:43.747049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.747078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.747237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.747263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.747450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.747479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.747625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.747653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.747783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.747829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.748006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.748033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.748220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.748248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.748436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.748462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.748618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.748648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.748802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.748831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 
00:34:37.769 [2024-07-15 03:37:43.748993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.749020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.749131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.749157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.749359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.749388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.749513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.749538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.749686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.749712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.749914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.749957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.750128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.750153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.750300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.750328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.750455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.750483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.750649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.750674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 
00:34:37.769 [2024-07-15 03:37:43.750834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.750860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.751042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.751071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.751226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.751253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.751417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.751459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.751609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.751637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.751794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.751821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.751978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.752007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.752165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.752193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.752369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.752400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.752513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.752539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 
00:34:37.769 [2024-07-15 03:37:43.752682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.752708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.752848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.752874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.753024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.753050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.753190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.753216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.753382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.769 [2024-07-15 03:37:43.753408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.769 qpair failed and we were unable to recover it. 00:34:37.769 [2024-07-15 03:37:43.753564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-15 03:37:43.753594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-15 03:37:43.753726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-15 03:37:43.753755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-15 03:37:43.753941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-15 03:37:43.753968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-15 03:37:43.754081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-15 03:37:43.754124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 00:34:37.770 [2024-07-15 03:37:43.754307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.770 [2024-07-15 03:37:43.754336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:37.770 qpair failed and we were unable to recover it. 
00:34:37.770 [2024-07-15 03:37:43.754524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.770 [2024-07-15 03:37:43.754550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:37.770 qpair failed and we were unable to recover it.
[... the three lines above repeat back-to-back from 03:37:43.754 through 03:37:43.793947 with only the timestamps and the tqpair handle changing; the handle cycles over 0x7fcbe0000b90, 0x7fcbe8000b90, 0x7fcbf0000b90, and 0x2300f20, every attempt targets addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:37.775 [2024-07-15 03:37:43.794085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.775 [2024-07-15 03:37:43.794110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.775 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.794268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.794296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.794470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.794498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.794655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.794683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.794860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.794897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.795035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.795063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.795208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.795234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.795368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.795397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.795581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.795609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.795766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.795794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 
00:34:37.776 [2024-07-15 03:37:43.795959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.795985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.796119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.796145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.796326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.796354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.796473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.796501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.796664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.796689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.796884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.796914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.797036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.797066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.797255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.797281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.797439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.797468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.797649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.797677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 
00:34:37.776 [2024-07-15 03:37:43.797861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.797893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.798032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.798062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.798240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.798269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.798424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.798450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.798593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.798636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.798783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.798811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.798968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.798995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.799107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.799133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.799273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.799298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.799399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.799424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 
00:34:37.776 [2024-07-15 03:37:43.799557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.799582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.799717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.799745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.799905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.799931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.800037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.776 [2024-07-15 03:37:43.800063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.776 qpair failed and we were unable to recover it. 00:34:37.776 [2024-07-15 03:37:43.800252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.800280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.800435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.800460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.800591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.800617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.804057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.804102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.804301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.804328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.804485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.804513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 
00:34:37.777 [2024-07-15 03:37:43.804691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.804720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.804886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.804914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.805100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.805128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.805282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.805310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.805468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.805494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.805620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.805646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.805842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.805870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.806056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.806082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.806227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.806257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.806392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.806418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 
00:34:37.777 [2024-07-15 03:37:43.806582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.806607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.806770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.806798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.806966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.806992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.807102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.807127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.807270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.807296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.807436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.807465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.807625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.807650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.807765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.807790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.807930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.807956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.808093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.808119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 
00:34:37.777 [2024-07-15 03:37:43.808228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.808270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.808431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.808460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.808625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.808651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.808790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.808833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.808996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.809025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.809163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.809188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.809356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.809381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.809512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.809541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.809706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.809731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.809916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.809945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 
00:34:37.777 [2024-07-15 03:37:43.810122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.810151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.810332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.810357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.810518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.777 [2024-07-15 03:37:43.810546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.777 qpair failed and we were unable to recover it. 00:34:37.777 [2024-07-15 03:37:43.810690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.810718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.810884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.810910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.811072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.811104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.811235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.811263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.811424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.811449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.811626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.811655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.811773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.811801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 
00:34:37.778 [2024-07-15 03:37:43.811964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.811990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.812098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.812124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.812296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.812329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.812492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.812518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.812667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.812692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.812873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.812911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.813047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.813073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.813213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.813238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.813424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.813452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.813587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.813612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 
00:34:37.778 [2024-07-15 03:37:43.813753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.813778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.813943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.813969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.814164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.814189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.814333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.814359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.814494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.814519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.814629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.814654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.814785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.814811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.814998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.815024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.815159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.815187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.815340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.815368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 
00:34:37.778 [2024-07-15 03:37:43.815510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.815538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.815697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.815722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.815861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.815928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.816047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.816075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.816242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.816267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.816379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.816405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.816543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.816573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.816720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.816746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.816890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.816933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.817099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.817125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 
00:34:37.778 [2024-07-15 03:37:43.817265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.817291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.817451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.778 [2024-07-15 03:37:43.817479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.778 qpair failed and we were unable to recover it. 00:34:37.778 [2024-07-15 03:37:43.817601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.817631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.817823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.817848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.818000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.818027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.818214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.818242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.818380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.818406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.818516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.818541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.818703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.818728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.818905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.818931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 
00:34:37.779 [2024-07-15 03:37:43.819062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.819103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.819250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.819278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.819434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.819461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.819642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.819670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.819837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.819862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.820005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.820030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.820187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.820215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.820367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.820396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.820552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.820577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.820688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.820714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 
00:34:37.779 [2024-07-15 03:37:43.820875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.820910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.821074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.821100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.821263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.821288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.821442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.821470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.821602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.821627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.821796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.821822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.822007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.822033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.822143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.822168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.822283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.822309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.822445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.822471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 
00:34:37.779 [2024-07-15 03:37:43.822634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.822659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.822772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.822798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.822960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.822987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.823127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.823153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.823291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.823316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.823506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.823534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.823687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.823712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.823903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.823931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.824057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.824086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.824280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.824305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 
00:34:37.779 [2024-07-15 03:37:43.824487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.824515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.779 [2024-07-15 03:37:43.824639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.779 [2024-07-15 03:37:43.824667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.779 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.824802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.824828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.824955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.824982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.825118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.825144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.825309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.825334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.825460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.825488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.825666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.825694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.825881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.825907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.826009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.826034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 
00:34:37.780 [2024-07-15 03:37:43.826236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.826264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.826395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.826421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.826552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.826577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.826697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.826725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.826848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.826873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.827025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.827050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.827171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.827198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.827359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.827385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.827517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.827559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.827712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.827740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 
00:34:37.780 [2024-07-15 03:37:43.827937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.827967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.828127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.828155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.828309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.828337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.828491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.828516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.828696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.828724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.828854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.828889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.829020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.829046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.829187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.829212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.829402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.829430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.829613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.829639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 
00:34:37.780 [2024-07-15 03:37:43.829745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.829786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.829935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.829964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.830098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.830124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.830238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.830263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.830425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.830454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.830588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.830613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.830757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.830782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.830934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.830963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.831122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.831148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.780 [2024-07-15 03:37:43.831310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.831338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 
00:34:37.780 [2024-07-15 03:37:43.831490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.780 [2024-07-15 03:37:43.831519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.780 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.831641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.831682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.831831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.831860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.832005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.832031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.832168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.832193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.832328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.832354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.832511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.832539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.832725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.832754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.832928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.832957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.833073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.833101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 
00:34:37.781 [2024-07-15 03:37:43.833298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.833323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.833444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.833471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.833606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.833631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.833825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.833851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.833991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.834017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.834124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.834149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.834299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.834326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.834434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.834459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.834607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.834635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.834818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.834843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 
00:34:37.781 [2024-07-15 03:37:43.834967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.834993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.835108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.835134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.835337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.835362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.835512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.835541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.835667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.835694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.835850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.835883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.836051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.836076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.836211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.836239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.836390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.836415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.836533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.836558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 
00:34:37.781 [2024-07-15 03:37:43.836691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.836717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.836813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.836839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.836992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.837018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.837117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.837142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.837317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.837343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.781 [2024-07-15 03:37:43.837484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.781 [2024-07-15 03:37:43.837527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.781 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.837673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.837702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.837866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.837900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.838039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.838064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.838199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.838228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 
00:34:37.782 [2024-07-15 03:37:43.838420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.838446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.838626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.838654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.838799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.838827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.838997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.839023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.839188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.839217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.839370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.839398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.839563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.839589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.839730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.839771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.839891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.839934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.840077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.840102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 
00:34:37.782 [2024-07-15 03:37:43.840209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.840250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.840365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.840392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.840587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.840612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.840721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.840764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.840942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.840971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.841100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.841126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.841230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.841254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.841390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.841418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.841567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.841593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 00:34:37.782 [2024-07-15 03:37:43.841778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.782 [2024-07-15 03:37:43.841806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.782 qpair failed and we were unable to recover it. 
00:34:37.782 [2024-07-15 03:37:43.841962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.841991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.842173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.842199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.842333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.842361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.842471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.842499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.842654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.842680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.842822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.842847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.842969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.842995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.843134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.843159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.843261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.843287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.843488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.843516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 
00:34:37.783 [2024-07-15 03:37:43.843637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.843662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.843769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.843794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.843916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.843945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.844109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.844135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.844273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.844298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.844468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.844500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.844652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.844677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.844818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.844843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.845022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.845050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.845187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.845213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 
00:34:37.783 [2024-07-15 03:37:43.845383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.845426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.845566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.845594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.845776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.845801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.845933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.845977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.846161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.846187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.846326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.846351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.846510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.846539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.846695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.846724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.846898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.846927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.847059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.847084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 
00:34:37.783 [2024-07-15 03:37:43.847220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.847248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.847401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.847426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.847610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.847639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.847805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.847831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.847944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.847971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.848155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.848200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.848327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.783 [2024-07-15 03:37:43.848364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.783 qpair failed and we were unable to recover it. 00:34:37.783 [2024-07-15 03:37:43.848536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.848564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.848681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.848707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.848844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.848891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 
00:34:37.784 [2024-07-15 03:37:43.849029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.849054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.849203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.849228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.849369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.849414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.849597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.849622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.849734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.849760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.849895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.849922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.850059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.850085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.850277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.850311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.850511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.850542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.850703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.850729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 
00:34:37.784 [2024-07-15 03:37:43.850909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.850940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.851086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.851115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.851289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.851314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.851499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.851527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.851654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.851682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.851866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.851910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.852084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.852114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.852275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.852305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:37.784 [2024-07-15 03:37:43.852450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.784 [2024-07-15 03:37:43.852476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:37.784 qpair failed and we were unable to recover it. 00:34:38.067 [2024-07-15 03:37:43.852616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.068 [2024-07-15 03:37:43.852641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.068 qpair failed and we were unable to recover it. 
00:34:38.068 [2024-07-15 03:37:43.852777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.068 [2024-07-15 03:37:43.852802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.068 qpair failed and we were unable to recover it. 00:34:38.068 [2024-07-15 03:37:43.852913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.068 [2024-07-15 03:37:43.852940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.068 qpair failed and we were unable to recover it. 00:34:38.068 [2024-07-15 03:37:43.853052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.068 [2024-07-15 03:37:43.853078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.068 qpair failed and we were unable to recover it. 00:34:38.068 [2024-07-15 03:37:43.853210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.068 [2024-07-15 03:37:43.853241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.068 qpair failed and we were unable to recover it. 00:34:38.068 [2024-07-15 03:37:43.853389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.068 [2024-07-15 03:37:43.853424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.068 qpair failed and we were unable to recover it. 00:34:38.068 [2024-07-15 03:37:43.853587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.068 [2024-07-15 03:37:43.853620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.068 qpair failed and we were unable to recover it. 00:34:38.068 [2024-07-15 03:37:43.853767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.068 [2024-07-15 03:37:43.853805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.068 qpair failed and we were unable to recover it. 00:34:38.068 [2024-07-15 03:37:43.853956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.068 [2024-07-15 03:37:43.853991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.068 qpair failed and we were unable to recover it. 00:34:38.068 [2024-07-15 03:37:43.854180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.068 [2024-07-15 03:37:43.854213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.068 qpair failed and we were unable to recover it. 00:34:38.068 [2024-07-15 03:37:43.854379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.068 [2024-07-15 03:37:43.854422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.068 qpair failed and we were unable to recover it. 
00:34:38.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3350388 Killed "${NVMF_APP[@]}" "$@"
00:34:38.069 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:38.069 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:38.069 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:38.069 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:34:38.069 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:38.069 [2024-07-15 03:37:43.862379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.069 [2024-07-15 03:37:43.862419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:38.069 qpair failed and we were unable to recover it.
00:34:38.070 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3350942
00:34:38.070 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
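For reference on the nvmf_tgt options traced above: -m takes SPDK's hex core mask, so -m 0xF0 pins the restarted target to CPU cores 4-7 (-i sets the shared-memory instance ID and -e the tracepoint group mask, as I read SPDK's standard app options). A tiny standalone check, not part of SPDK, of which cores such a mask selects:

#include <stdio.h>

int main(void)
{
    unsigned int mask = 0xF0;    /* the core mask passed to nvmf_tgt above */

    for (int core = 0; core < 32; core++) {
        if (mask & (1u << core))
            printf("core %d selected\n", core);   /* prints cores 4, 5, 6, 7 */
    }
    return 0;
}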
00:34:38.070 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3350942
00:34:38.070 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3350942 ']'
00:34:38.070 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:38.070 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:38.070 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:38.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:38.070 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:38.070 03:37:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
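waitforlisten (a bash helper in autotest_common.sh, per the trace above) polls until the restarted nvmf_tgt is up and its RPC socket /var/tmp/spdk.sock accepts connections, giving up after max_retries attempts. A rough standalone C sketch of that kind of poll loop (illustrative only, not the actual helper):

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    const char *rpc_addr = "/var/tmp/spdk.sock";   /* RPC socket named in the trace */
    int max_retries = 100;                         /* same bound as the trace */

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_un sa = { 0 };
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, rpc_addr, sizeof(sa.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            printf("target is listening on %s\n", rpc_addr);
            close(fd);
            return 0;
        }
        close(fd);

        struct timespec ts = { 0, 100 * 1000 * 1000 };   /* 100 ms between attempts */
        nanosleep(&ts, NULL);
    }
    fprintf(stderr, "timed out waiting for %s\n", rpc_addr);
    return 1;
}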
00:34:38.070 [2024-07-15 03:37:43.868175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.070 [2024-07-15 03:37:43.868213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:38.070 qpair failed and we were unable to recover it.
00:34:38.073 [2024-07-15 03:37:43.886738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.886765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.886888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.886927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.887076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.887115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.887277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.887306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.887474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.887500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.887637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.887663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.887810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.887836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.888005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.888034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.888141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.888168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.888325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.888353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 
00:34:38.073 [2024-07-15 03:37:43.888471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.888497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.888654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.888694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.888827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.888858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.889037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.889062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.889182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.889207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.889318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.889343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.889508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.889547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.889675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.889704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.889842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.889868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.890015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.890042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 
00:34:38.073 [2024-07-15 03:37:43.890186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.890212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.073 [2024-07-15 03:37:43.890351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.073 [2024-07-15 03:37:43.890377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.073 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.890552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.890584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.890721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.890747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.890868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.890899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.891013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.891039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.891186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.891211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.891336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.891360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.891481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.891509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.891656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.891682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 
00:34:38.074 [2024-07-15 03:37:43.891803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.891829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.891986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.892013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.892153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.892179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.892292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.892318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.892456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.892482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.892626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.892651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.892799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.892825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.892955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.892981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.893100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.893125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.893262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.893287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 
00:34:38.074 [2024-07-15 03:37:43.893424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.893449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.893612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.893637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.893746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.893771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.893912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.893938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.894058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.894085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.894238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.894262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.894404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.894430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.894594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.894620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.894756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.894781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.894916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.894957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 
00:34:38.074 [2024-07-15 03:37:43.895079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.895107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.895228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.895254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.895391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.895417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.895556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.895582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.895712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.895738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.895869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.895902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.896068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.896094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.074 qpair failed and we were unable to recover it. 00:34:38.074 [2024-07-15 03:37:43.896232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.074 [2024-07-15 03:37:43.896257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.896370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.896396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.896508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.896534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 
00:34:38.075 [2024-07-15 03:37:43.896698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.896723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.896856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.896892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.897038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.897067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.897244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.897269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.897407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.897433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.897580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.897605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.897747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.897772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.897917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.897942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.898060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.898087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.898224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.898249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 
00:34:38.075 [2024-07-15 03:37:43.898409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.898434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.898549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.898574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.898709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.898734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.898890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.898916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.899054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.899079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.899193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.899217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.899324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.899350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.899488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.899513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.899650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.899675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.899781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.899806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 
00:34:38.075 [2024-07-15 03:37:43.899926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.899951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.900092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.900117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.900257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.900282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.900449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.900475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.900588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.900613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.900752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.900777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.900954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.900980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.901094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.901121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.901242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.901268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.901378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.901404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 
00:34:38.075 [2024-07-15 03:37:43.901543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.901567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.901731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.901757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.075 [2024-07-15 03:37:43.901912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.075 [2024-07-15 03:37:43.901937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.075 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.902072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.902097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.902239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.902265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.902377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.902402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.902519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.902544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.902672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.902696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.902836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.902861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.902978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.903004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 
00:34:38.076 [2024-07-15 03:37:43.903136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.903160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.903313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.903338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.903443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.903472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.903635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.903660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.903793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.903818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.903945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.903970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.904109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.904133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.904248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.904273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.904388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.904414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.904532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.904558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 
00:34:38.076 [2024-07-15 03:37:43.904698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.904723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.904856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.904886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.905002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.905027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.905143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.905168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.905336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.905361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.905470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.905495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.905615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.905640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.905751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.905777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.905941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.905967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.906105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.906130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 
00:34:38.076 [2024-07-15 03:37:43.906234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.906260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.906372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.906397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.906517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.906543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.906650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.906674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.906814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.906839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.906978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.907004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.907117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.076 [2024-07-15 03:37:43.907141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.076 qpair failed and we were unable to recover it. 00:34:38.076 [2024-07-15 03:37:43.907251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.907276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.907425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.907450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.907589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.907616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 
00:34:38.077 [2024-07-15 03:37:43.907760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.907786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.907928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.907954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.908055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.908080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.908214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.908239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.908355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.908381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.908539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.908565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.908714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.908753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.908908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.908941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.909059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.909086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.909227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.909254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 
00:34:38.077 [2024-07-15 03:37:43.909392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.909418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.909534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.909559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.909725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.909756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.909866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.909899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.910041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.910080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.910225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.910251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.910365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.910390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.910554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.910580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.910724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.910751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 00:34:38.077 [2024-07-15 03:37:43.910909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.077 [2024-07-15 03:37:43.910948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.077 qpair failed and we were unable to recover it. 
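On Linux, errno 111 is ECONNREFUSED: each attempt reached 10.0.0.2, but nothing was accepting connections on TCP port 4420 (the standard NVMe/TCP port), so every qpair connect fails immediately and the test keeps retrying. A minimal standalone sketch (not part of the test code; address and port are taken from the log) that reproduces this exact failure mode when no target is listening:

```c
/* Sketch only: reproduce the connect() failure seen in the log.
 * With no listener on 10.0.0.2:4420 this prints errno = 111. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Expected output with no target listening:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```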
00:34:38.077 [2024-07-15 03:37:43.911096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.077 [2024-07-15 03:37:43.911123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:38.077 qpair failed and we were unable to recover it.
[... two further identical failure triples for tqpair=0x7fcbe0000b90 elided ...]
00:34:38.077 [2024-07-15 03:37:43.911572] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:34:38.077 [2024-07-15 03:37:43.911655] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... six further identical failure triples for tqpair=0x7fcbe8000b90 (03:37:43.911580 through 03:37:43.912445) elided ...]
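The two interleaved lines above come from the target side: while the host's connect() attempts are still being refused, a fresh SPDK nvmf target process is starting and initializing the DPDK EAL (core mask 0xF0 selects cores 4-7; --file-prefix=spdk0 keeps its hugepage files separate). For orientation, a hedged sketch of the usual SPDK environment bring-up via the public spdk/env.h API; the option values merely mirror the EAL parameters printed in the log, and this is not the test harness's own code:

```c
/* Hedged sketch, not the test's code: typical SPDK env bring-up.
 * spdk_env_opts_init()/spdk_env_init() are the public API in spdk/env.h;
 * "nvmf" and "0xF0" mirror the EAL parameters line in the log above. */
#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "nvmf";        /* process name shown in the EAL parameters */
    opts.core_mask = "0xF0";   /* run on cores 4-7, matching -c 0xF0 */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "failed to initialize SPDK environment\n");
        return 1;
    }
    printf("SPDK environment initialized\n");
    return 0;
}
```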
00:34:38.078 [2024-07-15 03:37:43.912588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.912614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.912757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.912783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.912901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.912927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.913064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.913091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.913230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.913256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.913421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.913447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.913559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.913585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.913749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.913781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.913906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.913946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.914079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.914106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 
00:34:38.078 [2024-07-15 03:37:43.914270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.914296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.914404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.914429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.914571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.914597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.914704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.914729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.914869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.914906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.915049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.915075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.915196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.915226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.915337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.915362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.915494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.915520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.915634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.915660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 
00:34:38.078 [2024-07-15 03:37:43.915811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.915849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.915979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.916007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.916121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.916153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.916296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.916322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.916451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.916477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.916621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.916650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.916817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.916843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.916994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.917021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.917182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.917209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.917347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.917375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 
00:34:38.078 [2024-07-15 03:37:43.917522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.078 [2024-07-15 03:37:43.917548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.078 qpair failed and we were unable to recover it. 00:34:38.078 [2024-07-15 03:37:43.917714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.917740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.917882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.917910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.918031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.918058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.918200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.918226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.918361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.918388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.918552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.918578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.918687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.918714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.918828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.918855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.919003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.919030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 
00:34:38.079 [2024-07-15 03:37:43.919143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.919169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.919309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.919335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.919478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.919505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.919674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.919700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.919856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.919889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.920034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.920060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.920170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.920198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.920309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.920337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.920456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.920481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.920589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.920619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 
00:34:38.079 [2024-07-15 03:37:43.920729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.920754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.920867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.920899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.921010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.921036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.921178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.921203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.921323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.921348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.921486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.921511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.921675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.921700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.921809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.921834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.921983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.922009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.922145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.922170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 
00:34:38.079 [2024-07-15 03:37:43.922306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.922339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.922478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.922504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.922645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.922670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.922816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.922842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.922961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.922987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.923121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.079 [2024-07-15 03:37:43.923146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.079 qpair failed and we were unable to recover it. 00:34:38.079 [2024-07-15 03:37:43.923286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.923312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.923444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.923470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.923613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.923639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.923750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.923776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 
00:34:38.080 [2024-07-15 03:37:43.923922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.923950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.924117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.924143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.924282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.924308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.924421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.924447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.924616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.924642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.924785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.924811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.924932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.924962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.925068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.925094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.925220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.925245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.925382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.925407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 
00:34:38.080 [2024-07-15 03:37:43.925514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.925540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.925662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.925688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.925829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.925854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.926025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.926065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.926251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.926278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.926414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.926439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.926546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.926570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.926716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.926743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.926887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.926913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.927033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.927058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 
00:34:38.080 [2024-07-15 03:37:43.927212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.927238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.927374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.927400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.927519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.927548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.927669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.927696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.927835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.927861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.928006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.928031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.928136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.928162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.928263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.928289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.928430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.928455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.928571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.928596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 
00:34:38.080 [2024-07-15 03:37:43.928701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.928735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.080 [2024-07-15 03:37:43.928886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.080 [2024-07-15 03:37:43.928918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.080 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.929058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.929084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.929219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.929249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.929390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.929416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.929553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.929584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.929738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.929765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.929890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.929926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.930045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.930073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.930188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.930214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 
00:34:38.081 [2024-07-15 03:37:43.930322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.930349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.930457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.930483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.930608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.930636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.930777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.930802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.930927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.930954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.931093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.931119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.931230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.931256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.931401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.931426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.931540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.931565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.931703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.931729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 
00:34:38.081 [2024-07-15 03:37:43.931871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.931908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.932029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.932068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.932228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.932267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.932389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.932416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.932550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.932576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.932711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.932737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.932851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.932882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.932996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.933021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.933162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.933187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.933309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.933335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 
00:34:38.081 [2024-07-15 03:37:43.933475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.933507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.933617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.933642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.933784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.933811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.933941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.933968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.934110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.934135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.934277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.934303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.934440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.081 [2024-07-15 03:37:43.934465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.081 qpair failed and we were unable to recover it. 00:34:38.081 [2024-07-15 03:37:43.934599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.934624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 00:34:38.082 [2024-07-15 03:37:43.934787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.934812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 00:34:38.082 [2024-07-15 03:37:43.934976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.935002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 
00:34:38.082 [2024-07-15 03:37:43.935112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.935137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 00:34:38.082 [2024-07-15 03:37:43.935250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.935275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 00:34:38.082 [2024-07-15 03:37:43.935413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.935439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 00:34:38.082 [2024-07-15 03:37:43.935553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.935579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 00:34:38.082 [2024-07-15 03:37:43.935719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.935744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 00:34:38.082 [2024-07-15 03:37:43.935874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.935906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 00:34:38.082 [2024-07-15 03:37:43.936026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.936052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 00:34:38.082 [2024-07-15 03:37:43.936193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.936219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 00:34:38.082 [2024-07-15 03:37:43.936354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.936379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 00:34:38.082 [2024-07-15 03:37:43.936517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.082 [2024-07-15 03:37:43.936542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.082 qpair failed and we were unable to recover it. 
00:34:38.082 [2024-07-15 03:37:43.936687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.082 [2024-07-15 03:37:43.936712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:38.082 qpair failed and we were unable to recover it.
[... the three-line error pattern above repeats, essentially verbatim, roughly 200 more times between 03:37:43.936844 and 03:37:43.970872 (log timestamps 00:34:38.082 through 00:34:38.088); only the sub-millisecond timestamps and the failing tqpair pointer vary (0x2300f20, 0x7fcbf0000b90, 0x7fcbe8000b90, 0x7fcbe0000b90), while the address (10.0.0.2), the port (4420), and the errno (111) stay constant, and every attempt ends with "qpair failed and we were unable to recover it." One unrelated message is interleaved partway through the run: ...]
00:34:38.084 EAL: No free 2048 kB hugepages reported on node 1
[... the connect() retries and failures then continue unchanged through 03:37:43.970872 ...]
00:34:38.088 [2024-07-15 03:37:43.971002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.088 [2024-07-15 03:37:43.971030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.088 qpair failed and we were unable to recover it. 00:34:38.088 [2024-07-15 03:37:43.971255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.088 [2024-07-15 03:37:43.971282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.088 qpair failed and we were unable to recover it. 00:34:38.088 [2024-07-15 03:37:43.971401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.088 [2024-07-15 03:37:43.971429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.088 qpair failed and we were unable to recover it. 00:34:38.088 [2024-07-15 03:37:43.971587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.088 [2024-07-15 03:37:43.971614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.088 qpair failed and we were unable to recover it. 00:34:38.088 [2024-07-15 03:37:43.971764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.088 [2024-07-15 03:37:43.971790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.088 qpair failed and we were unable to recover it. 00:34:38.088 [2024-07-15 03:37:43.971932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.088 [2024-07-15 03:37:43.971959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.088 qpair failed and we were unable to recover it. 00:34:38.088 [2024-07-15 03:37:43.972075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.088 [2024-07-15 03:37:43.972101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.088 qpair failed and we were unable to recover it. 00:34:38.088 [2024-07-15 03:37:43.972251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.088 [2024-07-15 03:37:43.972277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.088 qpair failed and we were unable to recover it. 00:34:38.088 [2024-07-15 03:37:43.972393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.088 [2024-07-15 03:37:43.972419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.088 qpair failed and we were unable to recover it. 00:34:38.088 [2024-07-15 03:37:43.972556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.088 [2024-07-15 03:37:43.972582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.088 qpair failed and we were unable to recover it. 
00:34:38.088 [2024-07-15 03:37:43.972718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.088 [2024-07-15 03:37:43.972744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.088 qpair failed and we were unable to recover it. 00:34:38.088 [2024-07-15 03:37:43.972884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.972911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.973085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.973111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.973267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.973293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.973417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.973442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.973582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.973610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.973832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.973858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.973996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.974023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.974166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.974191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.974355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.974381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 
00:34:38.089 [2024-07-15 03:37:43.974519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.974545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.974658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.974684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.974846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.974872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.975012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.975051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.975221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.975248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.975355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.975386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.975502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.975528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.975645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.975671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.975843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.975869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.975988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.976014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 
00:34:38.089 [2024-07-15 03:37:43.976154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.976180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.976295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.976321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.976451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.976477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.976625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.976653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.976797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.976822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.976947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.976974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.977081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.977106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.977250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.977276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.977383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.977408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.977528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.977553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 
00:34:38.089 [2024-07-15 03:37:43.977686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.977711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.977880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.977907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.978009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.978034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.978174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.978199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.978327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.978352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.089 [2024-07-15 03:37:43.978448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.089 [2024-07-15 03:37:43.978473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.089 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.978581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.978606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.978728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.978768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.978900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.978929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.979078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.979106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 
00:34:38.090 [2024-07-15 03:37:43.979266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.979292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.979404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.979429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.979584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.979615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.979726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.979753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.979897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.979924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.980064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.980089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.980202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.980227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.980341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.980367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.980474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.980500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.980644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.980671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 
00:34:38.090 [2024-07-15 03:37:43.980803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:38.090 [2024-07-15 03:37:43.980822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.980859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.980985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.981014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.981236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.981262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.981405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.981432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.981573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.981599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.981743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.981773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.981918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.981945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.982088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.982113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.982251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.982277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.982394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.982420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 
00:34:38.090 [2024-07-15 03:37:43.982562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.982587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.982716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.982755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.982903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.982931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.983048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.983076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.983217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.983244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.983415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.983441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.983580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.983607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.983749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.983775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.983931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.983958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.984112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.984138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 
00:34:38.090 [2024-07-15 03:37:43.984274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.090 [2024-07-15 03:37:43.984300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.090 qpair failed and we were unable to recover it. 00:34:38.090 [2024-07-15 03:37:43.984450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.984476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.984697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.984722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.984850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.984883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.985028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.985055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.985200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.985227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.985446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.985472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.985585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.985612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.985734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.985760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.985923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.985963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 
00:34:38.091 [2024-07-15 03:37:43.986090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.986117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.986254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.986280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.986392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.986423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.986544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.986569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.986711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.986736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.986853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.986888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.987044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.987084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.987235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.987262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.987410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.987436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.987622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.987649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 
00:34:38.091 [2024-07-15 03:37:43.987826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.987852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.987985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.988014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.988165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.988191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.988309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.988335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.988453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.988479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.988594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.988623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.988799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.988825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.988960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.988987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.989127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.989153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.989311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.989337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 
00:34:38.091 [2024-07-15 03:37:43.989475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.989501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.989635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.989661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.989836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.989862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.990033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.990059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.990177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.990205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.990377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.990402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.990512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.091 [2024-07-15 03:37:43.990537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.091 qpair failed and we were unable to recover it. 00:34:38.091 [2024-07-15 03:37:43.990676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.990701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 00:34:38.092 [2024-07-15 03:37:43.990819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.990845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 00:34:38.092 [2024-07-15 03:37:43.990985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.991024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 
00:34:38.092 [2024-07-15 03:37:43.991200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.991228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 00:34:38.092 [2024-07-15 03:37:43.991375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.991401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 00:34:38.092 [2024-07-15 03:37:43.991519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.991544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 00:34:38.092 [2024-07-15 03:37:43.991689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.991715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 00:34:38.092 [2024-07-15 03:37:43.991831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.991857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 00:34:38.092 [2024-07-15 03:37:43.992044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.992082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 00:34:38.092 [2024-07-15 03:37:43.992242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.992281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 00:34:38.092 [2024-07-15 03:37:43.992458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.992485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 00:34:38.092 [2024-07-15 03:37:43.992624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.992651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 00:34:38.092 [2024-07-15 03:37:43.992812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.092 [2024-07-15 03:37:43.992839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.092 qpair failed and we were unable to recover it. 
00:34:38.092 [2024-07-15 03:37:43.992965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.092 [2024-07-15 03:37:43.992991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:38.092 qpair failed and we were unable to recover it.
00:34:38.092 [2024-07-15 03:37:43.993691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.092 [2024-07-15 03:37:43.993730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:38.092 qpair failed and we were unable to recover it.
00:34:38.092 [2024-07-15 03:37:43.993883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.092 [2024-07-15 03:37:43.993912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:38.092 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error repeat continuously for tqpairs 0x7fcbe8000b90, 0x7fcbe0000b90, and 0x2300f20 against addr=10.0.0.2, port=4420 from 03:37:43.993 through 03:37:44.027 ...]
00:34:38.098 [2024-07-15 03:37:44.027458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.098 [2024-07-15 03:37:44.027484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420
00:34:38.098 qpair failed and we were unable to recover it.
00:34:38.098 [2024-07-15 03:37:44.027622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.027648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.027788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.027813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.027932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.027959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.028099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.028124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.028290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.028316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.028423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.028449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.028596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.028622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.028766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.028793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.028921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.028947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.029092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.029118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 
00:34:38.098 [2024-07-15 03:37:44.029290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.029316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.029459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.029485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.029623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.029649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.029787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.029813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.029944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.098 [2024-07-15 03:37:44.029971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.098 qpair failed and we were unable to recover it. 00:34:38.098 [2024-07-15 03:37:44.030142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.030173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.030311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.030338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.030485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.030511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.030654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.030679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.030793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.030819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 
00:34:38.099 [2024-07-15 03:37:44.030945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.030976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.031095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.031121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.031271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.031297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.031450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.031475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.031641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.031667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.031803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.031829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.031972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.031999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.032115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.032141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.032255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.032281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.032437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.032464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 
00:34:38.099 [2024-07-15 03:37:44.032604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.032630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.032773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.032799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.032927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.032954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.033099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.033126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.033283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.033309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.033446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.033472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.033614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.033641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.033806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.033832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.033958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.033985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.034125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.034151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 
00:34:38.099 [2024-07-15 03:37:44.034267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.034293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.034418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.034446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.034599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.034625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.034778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.034804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.034951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.034978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.035124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.035151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.035297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.035324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.035473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.035499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.035614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.035639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.099 [2024-07-15 03:37:44.035772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.035797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 
00:34:38.099 [2024-07-15 03:37:44.035920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.099 [2024-07-15 03:37:44.035947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.099 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.036086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.036112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.036236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.036262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.036403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.036429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.036570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.036596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.036704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.036730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.036870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.036921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.037089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.037115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.037282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.037308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.037449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.037475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 
00:34:38.100 [2024-07-15 03:37:44.037615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.037646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.037792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.037819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.037989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.038015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.038182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.038208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.038316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.038342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.038451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.038477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.038595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.038623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.038790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.038816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.038964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.038991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.039114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.039141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 
00:34:38.100 [2024-07-15 03:37:44.039280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.039306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.039459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.039485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.039625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.039652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.039782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.039808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.039936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.039963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.040105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.040131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.040253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.040279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.040420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.040446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.040613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.040639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.040753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.040779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 
00:34:38.100 [2024-07-15 03:37:44.040917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.040944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.041092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.041119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.041271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.041297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.041423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.041448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.041561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.041587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.041722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.100 [2024-07-15 03:37:44.041748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.100 qpair failed and we were unable to recover it. 00:34:38.100 [2024-07-15 03:37:44.041894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.041921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.042041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.042067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.042181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.042207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.042348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.042374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 
00:34:38.101 [2024-07-15 03:37:44.042538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.042564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.042704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.042731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.042840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.042866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.043019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.043045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.043215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.043241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.043372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.043398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.043513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.043541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.043704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.043730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.043873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.043906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.044049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.044075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 
00:34:38.101 [2024-07-15 03:37:44.044215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.044246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.044387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.044413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.044553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.044580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.044709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.044734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.044873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.044906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.045045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.045070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.045210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.045236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.045376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.045403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.045531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.045556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.045718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.045744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 
00:34:38.101 [2024-07-15 03:37:44.045893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.045920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.046042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.046068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.046206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.046232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.046355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.046380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.046523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.046549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.046710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.046735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.046874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.046904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.047058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.047083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.047222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.047248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.047387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.047413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 
00:34:38.101 [2024-07-15 03:37:44.047531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.047557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.101 [2024-07-15 03:37:44.047693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.101 [2024-07-15 03:37:44.047719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.101 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.047845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.047872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.048038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.048064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.048179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.048205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.048369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.048396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.048535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.048562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.048728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.048754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.048908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.048935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.049076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.049102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 
00:34:38.102 [2024-07-15 03:37:44.049241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.049268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.049407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.049434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.049547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.049573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.049725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.049751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.049895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.049922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.050090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.050116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.050253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.050279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.050422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.050448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.050593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.050620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.050731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.050757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 
00:34:38.102 [2024-07-15 03:37:44.050905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.050936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.051076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.051102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.051237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.051263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.051417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.051443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.051586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.051613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.051718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.051744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.051886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.051913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.052063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.052089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.052229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.052255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.052397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.052424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 
00:34:38.102 [2024-07-15 03:37:44.052566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.052592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.102 [2024-07-15 03:37:44.052728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.102 [2024-07-15 03:37:44.052755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.102 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.052893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.052921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.053062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.053088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.053221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.053247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.053412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.053438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.053588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.053614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.053726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.053753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.053919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.053946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.054086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.054112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 
00:34:38.103 [2024-07-15 03:37:44.054258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.054284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.054437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.054463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.054604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.054629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.054765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.054791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.054927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.054954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.055108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.055134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.055281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.055307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.055448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.055474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.055614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.055640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.055781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.055808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 
00:34:38.103 [2024-07-15 03:37:44.055960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.055987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.056155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.056181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.056319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.056345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.056464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.056490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.056629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.056655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.056790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.056816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.056960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.056987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.057141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.057167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.057314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.057339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.057479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.057505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 
00:34:38.103 [2024-07-15 03:37:44.057623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.057653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.057766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.057794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.057936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.057963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.058079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.058106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.058269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.058295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.058412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.103 [2024-07-15 03:37:44.058438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.103 qpair failed and we were unable to recover it. 00:34:38.103 [2024-07-15 03:37:44.058545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.058571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.058703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.058728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.058842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.058868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.058984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.059010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 
00:34:38.104 [2024-07-15 03:37:44.059148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.059174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.059322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.059348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.059465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.059490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.059601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.059627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.059747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.059773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.059918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.059945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.060059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.060085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.060199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.060225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.060365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.060390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.060533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.060559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 
00:34:38.104 [2024-07-15 03:37:44.060699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.060726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.060837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.060863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.061047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.061073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.061207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.061233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.061369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.061395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.061569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.061594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.061701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.061726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.061917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.061953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.062120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.062148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.062267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.062293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 
00:34:38.104 [2024-07-15 03:37:44.062437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.062462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.062575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.062600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.062742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.062767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.062887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.062913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.063030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.063059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.063176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.063201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.063339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.063365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.063502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.063528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.063665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.063690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.063797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.063822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 
00:34:38.104 [2024-07-15 03:37:44.063981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.064012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.104 [2024-07-15 03:37:44.064154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.104 [2024-07-15 03:37:44.064180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.104 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.064331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.064358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.064474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.064499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.064631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.064657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.064776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.064800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.064946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.064994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.065138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.065164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.065282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.065307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.065425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.065451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 
00:34:38.105 [2024-07-15 03:37:44.065625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.065650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.065768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.065792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.065911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.065937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.066080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.066105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.066262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.066287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.066402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.066427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.066538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.066563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.066678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.066704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.066839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.066864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.067013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.067038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 
00:34:38.105 [2024-07-15 03:37:44.067184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.067209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.067345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.067371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.067509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.067533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.067675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.067700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.067839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.067864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.068003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.068028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.068147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.068171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.068288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.068313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.068474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.068499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.068663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.068687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 
00:34:38.105 [2024-07-15 03:37:44.068798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.068823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.068959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.068985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.069130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.069155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.069270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.069296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.069406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.069432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.069560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.069587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.069727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.069753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.105 qpair failed and we were unable to recover it. 00:34:38.105 [2024-07-15 03:37:44.069904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.105 [2024-07-15 03:37:44.069931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.070048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.070074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.070188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.070215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 
00:34:38.106 [2024-07-15 03:37:44.070359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.070388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.070503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.070529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.070636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.070662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.070775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.070800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.070939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.070965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.071069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.071094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.071231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.071257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.071368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.071394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.071523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.071564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.071686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.071714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 
00:34:38.106 [2024-07-15 03:37:44.071839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.071866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.071955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:38.106 [2024-07-15 03:37:44.071990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:38.106 [2024-07-15 03:37:44.072005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:38.106 [2024-07-15 03:37:44.072017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:38.106 [2024-07-15 03:37:44.072028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:38.106 [2024-07-15 03:37:44.071983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.072008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.072091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:38.106 [2024-07-15 03:37:44.072125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:38.106 [2024-07-15 03:37:44.072178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:38.106 [2024-07-15 03:37:44.072181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:38.106 [2024-07-15 03:37:44.072129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.072155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.072312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.072340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.072479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.072506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.072632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.072659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.072806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.072832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it.
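Note on the app_setup_trace notices above: the target came up with tracing enabled (group mask 0xFFFF), and per the notice its trace buffer lives in the shared-memory file /dev/shm/nvmf_trace.0, which can either be inspected live with 'spdk_trace -s nvmf -i 0' or simply copied out for offline analysis. A minimal C sketch of that copy (the source path is taken from the notice; the destination name is an arbitrary choice for this sketch):

    #include <stdio.h>

    int main(void)
    {
        /* Source path comes from the app_setup_trace notice in the log;
         * the destination file name is an assumption for this sketch. */
        const char *src_path = "/dev/shm/nvmf_trace.0";
        const char *dst_path = "nvmf_trace.0";
        char buf[65536];
        size_t n;

        FILE *src = fopen(src_path, "rb");
        if (src == NULL) { perror(src_path); return 1; }
        FILE *dst = fopen(dst_path, "wb");
        if (dst == NULL) { perror(dst_path); fclose(src); return 1; }

        /* Plain block copy; the trace file is treated as opaque bytes here. */
        while ((n = fread(buf, 1, sizeof(buf), src)) > 0) {
            if (fwrite(buf, 1, n, dst) != n) { perror("fwrite"); return 1; }
        }

        fclose(src);
        fclose(dst);
        return 0;
    }

The copy is just a byte-for-byte snapshot of whatever events had been recorded at that point, which matches what the notice means by capturing the file for offline analysis/debug.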
00:34:38.106 [2024-07-15 03:37:44.072980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.073008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.073169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.073195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.073343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.073369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.073515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.073542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.073661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.073688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.073828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.073854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.073984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.074013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.074136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.074162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.074282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.074309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.074423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.074449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 
00:34:38.106 [2024-07-15 03:37:44.074593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.074621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.074760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.074786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.106 qpair failed and we were unable to recover it. 00:34:38.106 [2024-07-15 03:37:44.074933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.106 [2024-07-15 03:37:44.074958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.107 qpair failed and we were unable to recover it. 00:34:38.107 [2024-07-15 03:37:44.075097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.107 [2024-07-15 03:37:44.075123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.107 qpair failed and we were unable to recover it. 00:34:38.107 [2024-07-15 03:37:44.075235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.107 [2024-07-15 03:37:44.075260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.107 qpair failed and we were unable to recover it. 00:34:38.107 [2024-07-15 03:37:44.075405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.107 [2024-07-15 03:37:44.075429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.107 qpair failed and we were unable to recover it. 00:34:38.107 [2024-07-15 03:37:44.075590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.107 [2024-07-15 03:37:44.075617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.107 qpair failed and we were unable to recover it. 00:34:38.107 [2024-07-15 03:37:44.075759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.107 [2024-07-15 03:37:44.075786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.107 qpair failed and we were unable to recover it. 00:34:38.107 [2024-07-15 03:37:44.075926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.107 [2024-07-15 03:37:44.075954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.107 qpair failed and we were unable to recover it. 00:34:38.107 [2024-07-15 03:37:44.076076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.107 [2024-07-15 03:37:44.076102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.107 qpair failed and we were unable to recover it. 
00:34:38.107 [2024-07-15 03:37:44.076209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.107 [2024-07-15 03:37:44.076240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:38.107 qpair failed and we were unable to recover it.
00:34:38.107 [2024-07-15 03:37:44.076946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.107 [2024-07-15 03:37:44.076974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420
00:34:38.107 qpair failed and we were unable to recover it.
00:34:38.107 [2024-07-15 03:37:44.078941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.107 [2024-07-15 03:37:44.078982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:38.107 qpair failed and we were unable to recover it.
00:34:38.107 [... the same three-line record (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats continuously from 03:37:44.076209 through 03:37:44.108887, cycling among the three tqpairs shown above (0x7fcbe8000b90, 0x7fcbf0000b90, 0x2300f20); the intervening duplicate records are omitted here ...]
00:34:38.113 [2024-07-15 03:37:44.108854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.113 [2024-07-15 03:37:44.108887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420
00:34:38.113 qpair failed and we were unable to recover it.
00:34:38.113 [2024-07-15 03:37:44.109024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.109050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.109162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.109188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.109297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.109322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.109441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.109468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.109611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.109637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.109788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.109814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.109966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.109994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.110108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.110134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.110243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.110269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.110378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.110403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 
00:34:38.113 [2024-07-15 03:37:44.110552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.110577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.110688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.110713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.110815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.110840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.110993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.111019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.111127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.111152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.111276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.111302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.111456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.111481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.111595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.111620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.111734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.111760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.111867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.111901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 
00:34:38.113 [2024-07-15 03:37:44.112018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.112044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.112161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.112186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.112351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.112377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.112491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.112517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.112623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.112648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.112754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.112780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.112916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.112955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.113103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.113130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.113278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.113305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.113452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.113478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 
00:34:38.113 [2024-07-15 03:37:44.113621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.113646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.113782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.113807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.113923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.113951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.114078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.114116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.114271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.113 [2024-07-15 03:37:44.114298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.113 qpair failed and we were unable to recover it. 00:34:38.113 [2024-07-15 03:37:44.114433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.114459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.114579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.114606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.114746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.114772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.114918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.114945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.115061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.115087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 
00:34:38.114 [2024-07-15 03:37:44.115205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.115230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.115367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.115393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.115524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.115548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.115712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.115737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.115887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.115913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.116028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.116054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.116170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.116201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.116357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.116383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.116524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.116548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.116686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.116711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 
00:34:38.114 [2024-07-15 03:37:44.116838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.116885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.117014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.117043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.117157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.117183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.117302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.117328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.117455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.117481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.117598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.117624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.117762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.117788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.117942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.117981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.118132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.118159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.118276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.118302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 
00:34:38.114 [2024-07-15 03:37:44.118419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.118444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.118562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.118587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.118704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.118729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.118840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.118867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.118999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.119025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.119137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.119165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.119273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.119299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.119446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.119472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.119572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.119598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.119719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.119747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 
00:34:38.114 [2024-07-15 03:37:44.119865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.119906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.120038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.120065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.120181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.120206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.120317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.120347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.120509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.120549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.120693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.120720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.114 qpair failed and we were unable to recover it. 00:34:38.114 [2024-07-15 03:37:44.120843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.114 [2024-07-15 03:37:44.120886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.120997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.121023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.121135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.121159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.121275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.121300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 
00:34:38.115 [2024-07-15 03:37:44.121442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.121468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.121606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.121634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.121751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.121777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.121944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.121971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.122081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.122107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.122249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.122275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.122413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.122439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.122587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.122612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.122759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.122785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.122908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.122934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 
00:34:38.115 [2024-07-15 03:37:44.123064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.123090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.123200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.123225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.123368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.123396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.123503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.123529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.123646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.123672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.123781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.123807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.123923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.123950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.124096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.124122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.124287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.124312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.124416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.124442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 
00:34:38.115 [2024-07-15 03:37:44.124569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.124595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.124732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.124757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.124873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.124905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.125046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.125073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.125208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.125234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.125395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.125433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.125553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.125581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.125681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.125706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.125811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.125836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.125985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.126011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 
00:34:38.115 [2024-07-15 03:37:44.126122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.126147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.126250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.126275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.126394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.126420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.126560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.126591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.126755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.126780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.126908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.126934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.127045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.127070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.127181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.115 [2024-07-15 03:37:44.127207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.115 qpair failed and we were unable to recover it. 00:34:38.115 [2024-07-15 03:37:44.127310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.127335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.127500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.127526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 
00:34:38.116 [2024-07-15 03:37:44.127697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.127736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.127886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.127915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.128043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.128069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.128210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.128236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.128357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.128382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.128523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.128549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.128700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.128727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.128874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.128908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.129031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.129056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.129176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.129201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 
00:34:38.116 [2024-07-15 03:37:44.129328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.129353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.129468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.129494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.129605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.129630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.129757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.129782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.129897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.129924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.130058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.130083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.130202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.130227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.130340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.130365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.130474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.130499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.130608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.130633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 
00:34:38.116 [2024-07-15 03:37:44.130760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.130789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.130923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.130950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.131065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.131090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.131205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.131231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.131349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.131376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.131491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.131516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.131655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.131680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.131796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.131836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.131994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.132022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 00:34:38.116 [2024-07-15 03:37:44.132174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.116 [2024-07-15 03:37:44.132200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.116 qpair failed and we were unable to recover it. 
00:34:38.118 [2024-07-15 03:37:44.137780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.118 [2024-07-15 03:37:44.137819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbf0000b90 with addr=10.0.0.2, port=4420 00:34:38.118 qpair failed and we were unable to recover it.
[the same connect()/qpair-recovery error triplet repeats back-to-back from 03:37:44.129 through 03:37:44.161 for tqpair=0x2300f20, 0x7fcbe8000b90, and 0x7fcbf0000b90; the duplicate occurrences are elided here]
00:34:38.122 [2024-07-15 03:37:44.161353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.161379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.161488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.161515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.161662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.161693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.161836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.161863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.161982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.162007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.162146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.162171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.162288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.162314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.162447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.162472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.162579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.162607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.162722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.162748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 
00:34:38.122 [2024-07-15 03:37:44.162856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.162894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.163029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.163055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.163201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.163227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.163364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.122 [2024-07-15 03:37:44.163391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.122 qpair failed and we were unable to recover it. 00:34:38.122 [2024-07-15 03:37:44.163512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.163539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.163682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.163707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.163888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.163915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.164021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.164046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.164148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.164173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.164303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.164329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 
00:34:38.123 [2024-07-15 03:37:44.164435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.164460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.164601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.164626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.164739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.164765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.164914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.164941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.165108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.165134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.165278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.165304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.165466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.165492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.165629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.165655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.165769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.165795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.165938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.165966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 
00:34:38.123 [2024-07-15 03:37:44.166078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.166105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.166249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.166275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.166413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.166439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.166551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.166577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.166710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.166735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.166836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.166863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.166985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.167011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.167130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.167156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.167295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.167323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.167434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.167460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 
00:34:38.123 [2024-07-15 03:37:44.167593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.167618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.167735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.167762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.167882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.167909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.168051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.168076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.168222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.168247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.168390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.168416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.168520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.168545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.168655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.168681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.168792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.168817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 00:34:38.123 [2024-07-15 03:37:44.168960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.168987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.123 qpair failed and we were unable to recover it. 
00:34:38.123 [2024-07-15 03:37:44.169096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.123 [2024-07-15 03:37:44.169121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.169244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.169269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.169377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.169402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.169530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.169555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.169662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.169687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.169792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.169817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.169977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.170005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.170123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.170149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.170285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.170311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.170423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.170449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 
00:34:38.124 [2024-07-15 03:37:44.170590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.170618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.170752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.170779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.170899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.170927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.171075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.171101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.171212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.171237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.171346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.171372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.171491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.171515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.171618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.171643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.171774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.171799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.171929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.171955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 
00:34:38.124 [2024-07-15 03:37:44.172100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.172126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.172277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.172301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.172447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.172472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.172613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.172639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.172771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.172796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.172917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.172942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.173049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.173074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.173199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.173224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.173327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.173351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.173460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.173485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 
00:34:38.124 [2024-07-15 03:37:44.173614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.173639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.173774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.173799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.173920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.173947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.174099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.174138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.174285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.174312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.174423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.174449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.174561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.124 [2024-07-15 03:37:44.174588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.124 qpair failed and we were unable to recover it. 00:34:38.124 [2024-07-15 03:37:44.174694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.174720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.174835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.174860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.175006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.175032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 
00:34:38.125 [2024-07-15 03:37:44.175170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.175195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.175331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.175356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.175512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.175538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.175655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.175680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.175794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.175818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.175926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.175952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.176066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.176091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.176237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.176262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.176363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.176388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.176543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.176569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 
00:34:38.125 [2024-07-15 03:37:44.176673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.176698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.176854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.176908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.177058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.177085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.177221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.177247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.177392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.177418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.177528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.177555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.177689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.177715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.177897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.177925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.178048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.178074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.178183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.178208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 
00:34:38.125 [2024-07-15 03:37:44.178345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.178370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.178494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.178519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.178625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.178650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.178782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.178822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.178974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.179002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.179144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.179172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.179285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.179311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.179453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.179480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.179604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.179630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.179748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.179775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 
00:34:38.125 [2024-07-15 03:37:44.179918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.179945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.180067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.180092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.180223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.125 [2024-07-15 03:37:44.180248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.125 qpair failed and we were unable to recover it. 00:34:38.125 [2024-07-15 03:37:44.180381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-15 03:37:44.180405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-15 03:37:44.180522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-15 03:37:44.180547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-15 03:37:44.180663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-15 03:37:44.180689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-15 03:37:44.180799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-15 03:37:44.180825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-15 03:37:44.180949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-15 03:37:44.180977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-15 03:37:44.181117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-15 03:37:44.181143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 00:34:38.126 [2024-07-15 03:37:44.181309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.126 [2024-07-15 03:37:44.181335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.126 qpair failed and we were unable to recover it. 
00:34:38.126 [2024-07-15 03:37:44.181453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.126 [2024-07-15 03:37:44.181479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:38.126 qpair failed and we were unable to recover it.
00:34:38.126 [... this three-line connect()/qpair-failure sequence repeats continuously from 03:37:44.181 through roughly 03:37:44.197, always with errno = 111 against addr=10.0.0.2, port=4420, cycling through tqpair values 0x2300f20, 0x7fcbe8000b90, 0x7fcbe0000b90, and 0x7fcbf0000b90; the wall-clock stamp advances from 00:34:38.126 to 00:34:38.396 over the run ...]
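For context: errno = 111 on Linux is ECONNREFUSED, meaning the initiator's connect() to 10.0.0.2 on port 4420 (the standard NVMe/TCP port) reaches the target host but nothing is accepting on that port, which is why nvme_tcp_qpair_connect_sock cannot bring any qpair up. Below is a minimal sketch, not SPDK code, that reproduces the same failure with a bare POSIX socket; the address and port are taken from the log, everything else is illustrative:

/* Minimal sketch: reproduce the "connect() failed, errno = 111" seen above.
 * If 10.0.0.2 is reachable but nothing listens on TCP 4420, connect(2)
 * fails with ECONNREFUSED, which is errno 111 on Linux. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),          /* standard NVMe/TCP port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}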
00:34:38.396 [... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence continues for tqpair=0x7fcbe0000b90, 0x7fcbf0000b90, and 0x2300f20, interleaved with the autotest shell trace below ...]
00:34:38.396 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:38.396 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:34:38.396 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:38.396 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:38.396 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:38.397 [... the connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence then repeats from 03:37:44.199 through 03:37:44.213, still against addr=10.0.0.2, port=4420, alternating between tqpair=0x2300f20, 0x7fcbe0000b90, and 0x7fcbe8000b90 ...]
00:34:38.399 [2024-07-15 03:37:44.213718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.399 [2024-07-15 03:37:44.213746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.399 qpair failed and we were unable to recover it. 00:34:38.399 [2024-07-15 03:37:44.213857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.399 [2024-07-15 03:37:44.213900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.399 qpair failed and we were unable to recover it. 00:34:38.399 [2024-07-15 03:37:44.214007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.399 [2024-07-15 03:37:44.214034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.399 qpair failed and we were unable to recover it. 00:34:38.399 [2024-07-15 03:37:44.214140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.399 [2024-07-15 03:37:44.214166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.399 qpair failed and we were unable to recover it. 00:34:38.399 [2024-07-15 03:37:44.214282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.399 [2024-07-15 03:37:44.214309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.399 qpair failed and we were unable to recover it. 00:34:38.399 [2024-07-15 03:37:44.214430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.399 [2024-07-15 03:37:44.214457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.399 qpair failed and we were unable to recover it. 00:34:38.399 [2024-07-15 03:37:44.214628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.399 [2024-07-15 03:37:44.214654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.399 qpair failed and we were unable to recover it. 00:34:38.399 [2024-07-15 03:37:44.214765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.399 [2024-07-15 03:37:44.214791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.399 qpair failed and we were unable to recover it. 00:34:38.399 [2024-07-15 03:37:44.214910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.399 [2024-07-15 03:37:44.214937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.399 qpair failed and we were unable to recover it. 00:34:38.399 [2024-07-15 03:37:44.215053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.399 [2024-07-15 03:37:44.215080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe0000b90 with addr=10.0.0.2, port=4420 00:34:38.399 qpair failed and we were unable to recover it. 
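The errno = 111 that posix_sock_create() keeps reporting is ECONNREFUSED on Linux: the host can reach 10.0.0.2, but nothing is accepting on port 4420, which is presumably the condition this target_disconnect test is exercising. A minimal shell illustration (not part of the test run; the address and port are taken from the log above):

    # Reproduce the same failure mode with bash's /dev/tcp pseudo-device.
    $ exec 3<>/dev/tcp/10.0.0.2/4420
    bash: connect: Connection refused
    bash: /dev/tcp/10.0.0.2/4420: Connection refused

    # Confirm the symbolic name behind errno 111 on Linux.
    $ python3 -c 'import errno; print(errno.errorcode[111])'
    ECONNREFUSED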
00:34:38.399 [... six more identical failures for tqpair=0x7fcbe0000b90, 03:37:44.215227 through 03:37:44.216049 ...]
00:34:38.400 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:38.400 [... one identical failure for tqpair=0x2300f20 ...]
00:34:38.400 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:38.400 [... one identical failure for tqpair=0x2300f20 ...]
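The trap registered by nvmf/common.sh above guarantees that shared-memory diagnostics and target teardown still run whether the test is interrupted or exits normally. A minimal sketch of the same idiom, with placeholder cleanup functions (process_shm and nvmftestfini are the suite's own helpers and are shown here only by name):

    #!/usr/bin/env bash
    # Sketch of the cleanup idiom traced above: run the handler on Ctrl-C,
    # kill, or normal exit; `|| :` keeps a failing diagnostic step from
    # aborting the rest of the cleanup.
    cleanup() {
        collect_diagnostics || :   # stands in for: process_shm --id "$NVMF_APP_SHM_ID" || :
        teardown_target            # stands in for: nvmftestfini
    }
    trap cleanup SIGINT SIGTERM EXIT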
00:34:38.400 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:38.400 [... one identical failure for tqpair=0x2300f20 ...]
00:34:38.400 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:38.400 [... failures continue: one more for tqpair=0x2300f20, one for tqpair=0x7fcbe8000b90, then six for tqpair=0x7fcbf0000b90, all against addr=10.0.0.2, port=4420 ...]
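xtrace_disable is the suite's wrapper around bash's set +x: the autotest scripts run with command tracing enabled and suppress it around especially noisy sections. The underlying bash mechanism, for reference:

    #!/usr/bin/env bash
    set -x             # xtrace on: each command is echoed before it runs
    echo "traced"      # appears twice: once from xtrace, once from echo
    set +x             # xtrace off: subsequent commands run silently
    echo "not traced"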
00:34:38.400 [... the reconnect loop keeps hitting the same three-line failure from 03:37:44.217969 through 03:37:44.237839 (wall clock 00:34:38.400 to 00:34:38.404), cycling through tqpair=0x7fcbf0000b90, 0x7fcbe8000b90, 0x7fcbe0000b90, and 0x2300f20; every attempt is connect() failed, errno = 111 against addr=10.0.0.2, port=4420, and every qpair failed and could not be recovered ...]
00:34:38.404 [2024-07-15 03:37:44.237966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.404 [2024-07-15 03:37:44.237992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.404 qpair failed and we were unable to recover it. 00:34:38.404 [2024-07-15 03:37:44.238131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.404 [2024-07-15 03:37:44.238156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.404 qpair failed and we were unable to recover it. 00:34:38.404 [2024-07-15 03:37:44.238277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.404 [2024-07-15 03:37:44.238303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.404 qpair failed and we were unable to recover it. 00:34:38.404 [2024-07-15 03:37:44.238438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.404 [2024-07-15 03:37:44.238464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.404 qpair failed and we were unable to recover it. 00:34:38.404 [2024-07-15 03:37:44.238607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.404 [2024-07-15 03:37:44.238633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.404 qpair failed and we were unable to recover it. 00:34:38.404 [2024-07-15 03:37:44.238742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.404 [2024-07-15 03:37:44.238767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.404 qpair failed and we were unable to recover it. 00:34:38.404 [2024-07-15 03:37:44.238886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.404 [2024-07-15 03:37:44.238913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.404 qpair failed and we were unable to recover it. 00:34:38.404 Malloc0 00:34:38.404 [2024-07-15 03:37:44.239026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.404 [2024-07-15 03:37:44.239053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.404 qpair failed and we were unable to recover it. 00:34:38.404 [2024-07-15 03:37:44.239162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.404 [2024-07-15 03:37:44.239187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300f20 with addr=10.0.0.2, port=4420 00:34:38.404 qpair failed and we were unable to recover it. 
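Aside: errno = 111 in the storm above is Linux's ECONNREFUSED; the host-side NVMe/TCP initiator is dialing 10.0.0.2:4420 before the target has a listener there, so the kernel rejects every connect() immediately. A quick way to confirm the mapping on a node (assumes python3 is available, as on these CI boxes):

  $ python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  ECONNREFUSED Connection refused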
00:34:38.404 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:38.404 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:38.404 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:38.404 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 03:37:44.239305-03:37:44.240398: the connect() failed / qpair failed sequence continues for tqpair=0x2300f20, interleaved with the trace lines above ...]
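The rpc_cmd nvmf_create_transport -t tcp -o trace above is the target-side step that sets up the TCP transport the host is waiting for. A rough sketch of the same step outside the harness, assuming a running nvmf_tgt and SPDK's stock scripts/rpc.py (rpc_cmd is just the autotest wrapper; the -o flag is omitted below because its meaning depends on the SPDK revision):

  # instantiate the TCP transport inside a running nvmf_tgt (illustrative invocation)
  $ ./scripts/rpc.py nvmf_create_transport -t TCP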
[... 03:37:44.240501-03:37:44.242672: the connect() failed / qpair failed sequence repeats for tqpair=0x2300f20 ...]
00:34:38.405 [2024-07-15 03:37:44.242739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... 03:37:44.242836-03:37:44.243487: the sequence continues for tqpair=0x2300f20 ...]
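The *** TCP Transport Init *** notice shows the target's transport coming up mid-storm; the connect() calls keep failing because nothing is listening on port 4420 yet. A hedged aside on watching for the listener from the shell with standard iproute2:

  # prints nothing until the target actually listens on 4420; until then connect() gets ECONNREFUSED
  $ ss -ltn '( sport = :4420 )'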
[... 03:37:44.243638-03:37:44.251205: the connect() failed / qpair failed sequence repeats, cycling across tqpair=0x2300f20, tqpair=0x7fcbe8000b90 and tqpair=0x7fcbf0000b90, always against addr=10.0.0.2, port=4420 ...]
00:34:38.406 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:38.406 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:38.406 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:38.406 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 03:37:44.251314-03:37:44.252385: the connect() failed / qpair failed sequence continues for tqpair=0x7fcbf0000b90 and tqpair=0x2300f20, interleaved with the trace lines above ...]
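The traced rpc_cmd nvmf_create_subsystem creates the subsystem the host is trying to reach. A minimal sketch with stock rpc.py, flags copied from the trace (-a allows any host NQN to connect, -s sets the reported serial number):

  $ ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001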
[... 03:37:44.252491-03:37:44.258520: the connect() failed / qpair failed sequence repeats, alternating between tqpair=0x2300f20 and tqpair=0x7fcbf0000b90 ...]
[... 03:37:44.258636-03:37:44.258954: the connect() failed / qpair failed sequence continues for tqpair=0x2300f20 and tqpair=0x7fcbe8000b90 ...]
00:34:38.408 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:38.408 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:38.408 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:38.408 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 03:37:44.259076-03:37:44.259744: the sequence continues for tqpair=0x7fcbe8000b90, interleaved with the trace lines above ...]
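rpc_cmd nvmf_subsystem_add_ns attaches the Malloc0 ram-disk bdev (whose name was printed a few entries earlier) to cnode1 as a namespace. A hedged sketch of the usual pair of steps with stock rpc.py; the 64 MiB / 512-byte sizes are illustrative, not taken from this run:

  $ ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB ram disk, 512-byte blocks
  $ ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0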
[... 03:37:44.259883-03:37:44.264481: the connect() failed / qpair failed sequence repeats, cycling across tqpair=0x7fcbe8000b90, tqpair=0x2300f20 and tqpair=0x7fcbf0000b90 ...]
00:34:38.409 [2024-07-15 03:37:44.264593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.409 [2024-07-15 03:37:44.264631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.409 qpair failed and we were unable to recover it. 00:34:38.409 [2024-07-15 03:37:44.264748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.409 [2024-07-15 03:37:44.264776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.409 qpair failed and we were unable to recover it. 00:34:38.409 [2024-07-15 03:37:44.264887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.409 [2024-07-15 03:37:44.264914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.409 qpair failed and we were unable to recover it. 00:34:38.409 [2024-07-15 03:37:44.265030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.409 [2024-07-15 03:37:44.265057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.409 qpair failed and we were unable to recover it. 00:34:38.409 [2024-07-15 03:37:44.265175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.409 [2024-07-15 03:37:44.265201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.409 qpair failed and we were unable to recover it. 00:34:38.409 [2024-07-15 03:37:44.265342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.409 [2024-07-15 03:37:44.265369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.409 qpair failed and we were unable to recover it. 00:34:38.409 [2024-07-15 03:37:44.265481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.409 [2024-07-15 03:37:44.265507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.409 qpair failed and we were unable to recover it. 00:34:38.409 [2024-07-15 03:37:44.265673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.409 [2024-07-15 03:37:44.265699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.409 qpair failed and we were unable to recover it. 00:34:38.409 [2024-07-15 03:37:44.265812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.409 [2024-07-15 03:37:44.265839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.409 qpair failed and we were unable to recover it. 00:34:38.409 [2024-07-15 03:37:44.265989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.409 [2024-07-15 03:37:44.266016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420 00:34:38.409 qpair failed and we were unable to recover it. 
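For triage: errno 111 on Linux is ECONNREFUSED, meaning connect() reached 10.0.0.2 but nothing was accepting on port 4420 at that moment; the target's *** NVMe/TCP Target Listening *** notice only shows up further down, at 03:37:44.271. A minimal shell sketch of the same failure follows; the host/port is arbitrary, any address with no listener behaves the same way:

  # connect() to a port with no listener fails immediately with
  # errno 111 (ECONNREFUSED), the same error posix.c:1038 logs above.
  bash -c ': </dev/tcp/127.0.0.1/4420' || echo "connect refused, rc=$?"
  # Resolve errno 111 to its symbolic name and message:
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # -> ECONNREFUSED Connection refused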
00:34:38.409 [2024-07-15 03:37:44.266152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:38.409 [2024-07-15 03:37:44.266178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcbe8000b90 with addr=10.0.0.2, port=4420
00:34:38.409 qpair failed and we were unable to recover it.
00:34:38.409 [... the same triplet repeats from 03:37:44.266 through 03:37:44.270 for tqpair=0x7fcbe8000b90 and then 0x2300f20, interleaved with the xtrace lines below ...]
00:34:38.409 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:38.409 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:38.409 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:38.409 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:38.410 [2024-07-15 03:37:44.271172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:38.410 [2024-07-15 03:37:44.273425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.410 [2024-07-15 03:37:44.273561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.410 [2024-07-15 03:37:44.273589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.410 [2024-07-15 03:37:44.273604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.410 [2024-07-15 03:37:44.273617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:38.410 [2024-07-15 03:37:44.273651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.410 qpair failed and we were unable to recover it.
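Decoding the CONNECT failure above: SPDK prints the status code in decimal, and 130 is 0x82. With sct 1 (command-specific status) on a Fabrics CONNECT command, 0x82 is the NVMe-oF "Connect Invalid Parameters" status (SPDK carries this in its nvmf spec header, as SPDK_NVMF_FABRIC_SC_INVALID_PARAM if I recall the constant name right). That lines up with the target-side "Unknown controller ID 0x1": the I/O qpair's CONNECT names cntlid 0x1, presumably the controller created by the original association that is gone after the forced disconnect, so the target rejects the re-attach; rc -5 (-EIO) and the -6 (ENXIO) CQ transport error are the host-side fallout. A quick check of the arithmetic:

  # The spec tables are in hex, the log is in decimal:
  printf 'sct %d, sc %d = sc 0x%02x\n' 1 130 130
  # -> sct 1, sc 130 = sc 0x82  (Fabrics CONNECT: Connect Invalid Parameters)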
00:34:38.410 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:38.410 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:38.410 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:38.410 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:38.410 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:38.410 03:37:44 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3350531
00:34:38.410 [2024-07-15 03:37:44.283315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.410 [2024-07-15 03:37:44.283433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.410 [2024-07-15 03:37:44.283459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.410 [2024-07-15 03:37:44.283474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.410 [2024-07-15 03:37:44.283487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:38.410 [2024-07-15 03:37:44.283522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.410 qpair failed and we were unable to recover it.
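Stripped of the xtrace framing, the target-side configuration the script has issued by this point is just three RPCs. A hand-run equivalent, sketched with SPDK's stock scripts/rpc.py client from a source checkout: the subsystem NQN, bdev name, address and port are exactly those in the log, while the earlier nvmf_create_transport/nvmf_create_subsystem/bdev setup is assumed to have happened before this part of the log.

  # host/target_disconnect.sh@24-26, as seen in the xtrace lines above:
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # rpc_cmd in the test harness is a thin wrapper around rpc.py; the
  # "[[ 0 == 0 ]]" checks above are it asserting each call returned 0.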
00:34:38.410 [... the ctrlr.c:761 "Unknown controller ID 0x1" CONNECT failure sequence repeats roughly every 10 ms from 03:37:44.293 through 03:37:44.684 (about 40 attempts), each one ending with "sct 1, sc 130", a CQ transport error -6 on tqpair=0x2300f20, qpair id 3, and "qpair failed and we were unable to recover it."; nothing else changes between attempts ...]
00:34:38.673 [2024-07-15 03:37:44.694416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.673 [2024-07-15 03:37:44.694525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.673 [2024-07-15 03:37:44.694551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.673 [2024-07-15 03:37:44.694565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.673 [2024-07-15 03:37:44.694578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.673 [2024-07-15 03:37:44.694605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.673 qpair failed and we were unable to recover it. 00:34:38.673 [2024-07-15 03:37:44.704410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.673 [2024-07-15 03:37:44.704527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.673 [2024-07-15 03:37:44.704551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.673 [2024-07-15 03:37:44.704566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.673 [2024-07-15 03:37:44.704579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.673 [2024-07-15 03:37:44.704606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.673 qpair failed and we were unable to recover it. 00:34:38.673 [2024-07-15 03:37:44.714428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.673 [2024-07-15 03:37:44.714534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.673 [2024-07-15 03:37:44.714560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.673 [2024-07-15 03:37:44.714574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.673 [2024-07-15 03:37:44.714586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.673 [2024-07-15 03:37:44.714614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.673 qpair failed and we were unable to recover it. 
00:34:38.673 [2024-07-15 03:37:44.724459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.673 [2024-07-15 03:37:44.724564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.673 [2024-07-15 03:37:44.724589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.673 [2024-07-15 03:37:44.724603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.673 [2024-07-15 03:37:44.724615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.673 [2024-07-15 03:37:44.724644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.673 qpair failed and we were unable to recover it. 00:34:38.673 [2024-07-15 03:37:44.734469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.673 [2024-07-15 03:37:44.734579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.673 [2024-07-15 03:37:44.734604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.673 [2024-07-15 03:37:44.734627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.673 [2024-07-15 03:37:44.734641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.673 [2024-07-15 03:37:44.734670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.673 qpair failed and we were unable to recover it. 00:34:38.673 [2024-07-15 03:37:44.744515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.673 [2024-07-15 03:37:44.744649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.673 [2024-07-15 03:37:44.744674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.673 [2024-07-15 03:37:44.744688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.673 [2024-07-15 03:37:44.744701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.673 [2024-07-15 03:37:44.744728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.673 qpair failed and we were unable to recover it. 
00:34:38.673 [2024-07-15 03:37:44.754600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.673 [2024-07-15 03:37:44.754716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.673 [2024-07-15 03:37:44.754743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.673 [2024-07-15 03:37:44.754757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.673 [2024-07-15 03:37:44.754774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.673 [2024-07-15 03:37:44.754803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.673 qpair failed and we were unable to recover it. 00:34:38.673 [2024-07-15 03:37:44.764568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.673 [2024-07-15 03:37:44.764678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.673 [2024-07-15 03:37:44.764704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.673 [2024-07-15 03:37:44.764718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.673 [2024-07-15 03:37:44.764730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.673 [2024-07-15 03:37:44.764758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.673 qpair failed and we were unable to recover it. 00:34:38.673 [2024-07-15 03:37:44.774599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.673 [2024-07-15 03:37:44.774707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.673 [2024-07-15 03:37:44.774732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.673 [2024-07-15 03:37:44.774746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.673 [2024-07-15 03:37:44.774759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.673 [2024-07-15 03:37:44.774787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.673 qpair failed and we were unable to recover it. 
00:34:38.673 [2024-07-15 03:37:44.784630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.673 [2024-07-15 03:37:44.784741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.673 [2024-07-15 03:37:44.784766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.673 [2024-07-15 03:37:44.784780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.673 [2024-07-15 03:37:44.784793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.673 [2024-07-15 03:37:44.784821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.673 qpair failed and we were unable to recover it. 00:34:38.673 [2024-07-15 03:37:44.794659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.674 [2024-07-15 03:37:44.794771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.674 [2024-07-15 03:37:44.794795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.674 [2024-07-15 03:37:44.794809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.674 [2024-07-15 03:37:44.794823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.674 [2024-07-15 03:37:44.794850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.674 qpair failed and we were unable to recover it. 00:34:38.674 [2024-07-15 03:37:44.804716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.674 [2024-07-15 03:37:44.804831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.674 [2024-07-15 03:37:44.804856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.674 [2024-07-15 03:37:44.804870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.674 [2024-07-15 03:37:44.804895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.674 [2024-07-15 03:37:44.804924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.674 qpair failed and we were unable to recover it. 
00:34:38.933 [2024-07-15 03:37:44.814741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.933 [2024-07-15 03:37:44.814863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.933 [2024-07-15 03:37:44.814898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.933 [2024-07-15 03:37:44.814914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.933 [2024-07-15 03:37:44.814927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.933 [2024-07-15 03:37:44.814956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.933 qpair failed and we were unable to recover it. 00:34:38.933 [2024-07-15 03:37:44.824753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.933 [2024-07-15 03:37:44.824922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.933 [2024-07-15 03:37:44.824949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.933 [2024-07-15 03:37:44.824970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.933 [2024-07-15 03:37:44.824984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.933 [2024-07-15 03:37:44.825013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.933 qpair failed and we were unable to recover it. 00:34:38.933 [2024-07-15 03:37:44.834894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.933 [2024-07-15 03:37:44.835020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.933 [2024-07-15 03:37:44.835045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.933 [2024-07-15 03:37:44.835059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.933 [2024-07-15 03:37:44.835072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.933 [2024-07-15 03:37:44.835100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.933 qpair failed and we were unable to recover it. 
00:34:38.933 [2024-07-15 03:37:44.844885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.933 [2024-07-15 03:37:44.844994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.933 [2024-07-15 03:37:44.845019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.933 [2024-07-15 03:37:44.845034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.933 [2024-07-15 03:37:44.845046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.933 [2024-07-15 03:37:44.845074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.933 qpair failed and we were unable to recover it. 00:34:38.933 [2024-07-15 03:37:44.854821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.933 [2024-07-15 03:37:44.854938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.933 [2024-07-15 03:37:44.854963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.933 [2024-07-15 03:37:44.854977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.933 [2024-07-15 03:37:44.854990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.933 [2024-07-15 03:37:44.855017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.933 qpair failed and we were unable to recover it. 00:34:38.933 [2024-07-15 03:37:44.864855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.933 [2024-07-15 03:37:44.864979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.933 [2024-07-15 03:37:44.865004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.865018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.865031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.865059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 
00:34:38.934 [2024-07-15 03:37:44.874866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.874987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.875013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.875027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.875038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.875065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 00:34:38.934 [2024-07-15 03:37:44.884907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.885046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.885072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.885086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.885099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.885126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 00:34:38.934 [2024-07-15 03:37:44.894941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.895053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.895079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.895093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.895107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.895134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 
00:34:38.934 [2024-07-15 03:37:44.904975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.905091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.905116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.905130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.905142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.905170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 00:34:38.934 [2024-07-15 03:37:44.915016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.915132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.915155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.915174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.915186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.915213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 00:34:38.934 [2024-07-15 03:37:44.925016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.925128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.925153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.925167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.925180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.925207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 
00:34:38.934 [2024-07-15 03:37:44.935074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.935185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.935211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.935225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.935238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.935266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 00:34:38.934 [2024-07-15 03:37:44.945136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.945246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.945271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.945284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.945297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.945324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 00:34:38.934 [2024-07-15 03:37:44.955152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.955272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.955297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.955311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.955324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.955353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 
00:34:38.934 [2024-07-15 03:37:44.965131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.965275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.965301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.965315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.965328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.965356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 00:34:38.934 [2024-07-15 03:37:44.975261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.975366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.975391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.975405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.975418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.975446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 00:34:38.934 [2024-07-15 03:37:44.985208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.934 [2024-07-15 03:37:44.985322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.934 [2024-07-15 03:37:44.985347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.934 [2024-07-15 03:37:44.985361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.934 [2024-07-15 03:37:44.985373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.934 [2024-07-15 03:37:44.985401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.934 qpair failed and we were unable to recover it. 
00:34:38.934 [2024-07-15 03:37:44.995248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.935 [2024-07-15 03:37:44.995358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.935 [2024-07-15 03:37:44.995383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.935 [2024-07-15 03:37:44.995396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.935 [2024-07-15 03:37:44.995409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.935 [2024-07-15 03:37:44.995437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.935 qpair failed and we were unable to recover it. 00:34:38.935 [2024-07-15 03:37:45.005264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.935 [2024-07-15 03:37:45.005377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.935 [2024-07-15 03:37:45.005407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.935 [2024-07-15 03:37:45.005421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.935 [2024-07-15 03:37:45.005435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.935 [2024-07-15 03:37:45.005462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.935 qpair failed and we were unable to recover it. 00:34:38.935 [2024-07-15 03:37:45.015303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.935 [2024-07-15 03:37:45.015408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.935 [2024-07-15 03:37:45.015433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.935 [2024-07-15 03:37:45.015447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.935 [2024-07-15 03:37:45.015460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.935 [2024-07-15 03:37:45.015487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.935 qpair failed and we were unable to recover it. 
00:34:38.935 [2024-07-15 03:37:45.025393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.935 [2024-07-15 03:37:45.025509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.935 [2024-07-15 03:37:45.025534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.935 [2024-07-15 03:37:45.025548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.935 [2024-07-15 03:37:45.025561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.935 [2024-07-15 03:37:45.025588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.935 qpair failed and we were unable to recover it. 00:34:38.935 [2024-07-15 03:37:45.035393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.935 [2024-07-15 03:37:45.035510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.935 [2024-07-15 03:37:45.035536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.935 [2024-07-15 03:37:45.035551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.935 [2024-07-15 03:37:45.035564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.935 [2024-07-15 03:37:45.035592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.935 qpair failed and we were unable to recover it. 00:34:38.935 [2024-07-15 03:37:45.045382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.935 [2024-07-15 03:37:45.045496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.935 [2024-07-15 03:37:45.045522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.935 [2024-07-15 03:37:45.045536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.935 [2024-07-15 03:37:45.045549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.935 [2024-07-15 03:37:45.045582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.935 qpair failed and we were unable to recover it. 
00:34:38.935 [2024-07-15 03:37:45.055413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.935 [2024-07-15 03:37:45.055534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.935 [2024-07-15 03:37:45.055559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.935 [2024-07-15 03:37:45.055574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.935 [2024-07-15 03:37:45.055587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.935 [2024-07-15 03:37:45.055614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.935 qpair failed and we were unable to recover it. 00:34:38.935 [2024-07-15 03:37:45.065480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.935 [2024-07-15 03:37:45.065635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.935 [2024-07-15 03:37:45.065660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.935 [2024-07-15 03:37:45.065675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.935 [2024-07-15 03:37:45.065688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:38.935 [2024-07-15 03:37:45.065715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.935 qpair failed and we were unable to recover it. 00:34:39.194 [2024-07-15 03:37:45.075511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.194 [2024-07-15 03:37:45.075642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.194 [2024-07-15 03:37:45.075676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.194 [2024-07-15 03:37:45.075703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.194 [2024-07-15 03:37:45.075727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.194 [2024-07-15 03:37:45.075766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.194 qpair failed and we were unable to recover it. 
00:34:39.194 [2024-07-15 03:37:45.085680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.194 [2024-07-15 03:37:45.085848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.194 [2024-07-15 03:37:45.085875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.194 [2024-07-15 03:37:45.085903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.085917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.085946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 00:34:39.195 [2024-07-15 03:37:45.095597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.195 [2024-07-15 03:37:45.095710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.195 [2024-07-15 03:37:45.095741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.195 [2024-07-15 03:37:45.095757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.095770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.095798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 00:34:39.195 [2024-07-15 03:37:45.105632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.195 [2024-07-15 03:37:45.105748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.195 [2024-07-15 03:37:45.105773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.195 [2024-07-15 03:37:45.105787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.105801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.105830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 
00:34:39.195 [2024-07-15 03:37:45.115651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.195 [2024-07-15 03:37:45.115768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.195 [2024-07-15 03:37:45.115794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.195 [2024-07-15 03:37:45.115809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.115821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.115849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 00:34:39.195 [2024-07-15 03:37:45.125622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.195 [2024-07-15 03:37:45.125733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.195 [2024-07-15 03:37:45.125758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.195 [2024-07-15 03:37:45.125772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.125784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.125812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 00:34:39.195 [2024-07-15 03:37:45.135661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.195 [2024-07-15 03:37:45.135769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.195 [2024-07-15 03:37:45.135794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.195 [2024-07-15 03:37:45.135808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.135822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.135856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 
00:34:39.195 [2024-07-15 03:37:45.145664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.195 [2024-07-15 03:37:45.145776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.195 [2024-07-15 03:37:45.145801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.195 [2024-07-15 03:37:45.145815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.145828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.145855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 00:34:39.195 [2024-07-15 03:37:45.155696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.195 [2024-07-15 03:37:45.155809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.195 [2024-07-15 03:37:45.155834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.195 [2024-07-15 03:37:45.155848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.155861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.155895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 00:34:39.195 [2024-07-15 03:37:45.165739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.195 [2024-07-15 03:37:45.165844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.195 [2024-07-15 03:37:45.165869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.195 [2024-07-15 03:37:45.165895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.165910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.165938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 
00:34:39.195 [2024-07-15 03:37:45.175811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.195 [2024-07-15 03:37:45.175960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.195 [2024-07-15 03:37:45.175985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.195 [2024-07-15 03:37:45.175999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.176012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.176041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 00:34:39.195 [2024-07-15 03:37:45.185789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.195 [2024-07-15 03:37:45.185899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.195 [2024-07-15 03:37:45.185929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.195 [2024-07-15 03:37:45.185944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.185957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.185985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 00:34:39.195 [2024-07-15 03:37:45.195813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.195 [2024-07-15 03:37:45.195920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.195 [2024-07-15 03:37:45.195946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.195 [2024-07-15 03:37:45.195960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.195 [2024-07-15 03:37:45.195973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:39.195 [2024-07-15 03:37:45.196000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.195 qpair failed and we were unable to recover it. 
00:34:39.195 [2024-07-15 03:37:45.205910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.195 [2024-07-15 03:37:45.206017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.195 [2024-07-15 03:37:45.206042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.195 [2024-07-15 03:37:45.206056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.195 [2024-07-15 03:37:45.206068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.195 [2024-07-15 03:37:45.206096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.195 qpair failed and we were unable to recover it.
00:34:39.195 [2024-07-15 03:37:45.215927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.195 [2024-07-15 03:37:45.216088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.195 [2024-07-15 03:37:45.216113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.216127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.216140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.216168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.225919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.226037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.226061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.226075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.226088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.226121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.235940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.236047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.236072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.236086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.236098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.236126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.245961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.246066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.246091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.246104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.246118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.246145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.255991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.256097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.256123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.256137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.256150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.256177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.266080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.266256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.266282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.266301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.266314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.266344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.276143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.276254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.276285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.276300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.276313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.276341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.286112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.286222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.286248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.286263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.286276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.286303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.296098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.296198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.296224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.296238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.296250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.296278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.306207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.306333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.306359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.306373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.306386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.306413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.316211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.316328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.316354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.316368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.316386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.316415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.326219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.326356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.326381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.326395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.326408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.326436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.196 [2024-07-15 03:37:45.336249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.196 [2024-07-15 03:37:45.336426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.196 [2024-07-15 03:37:45.336458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.196 [2024-07-15 03:37:45.336477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.196 [2024-07-15 03:37:45.336491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.196 [2024-07-15 03:37:45.336521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.196 qpair failed and we were unable to recover it.
00:34:39.456 [2024-07-15 03:37:45.346280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.456 [2024-07-15 03:37:45.346395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.456 [2024-07-15 03:37:45.346430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.456 [2024-07-15 03:37:45.346445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.456 [2024-07-15 03:37:45.346458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.456 [2024-07-15 03:37:45.346486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.456 qpair failed and we were unable to recover it.
00:34:39.456 [2024-07-15 03:37:45.356340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.456 [2024-07-15 03:37:45.356466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.456 [2024-07-15 03:37:45.356493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.456 [2024-07-15 03:37:45.356508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.456 [2024-07-15 03:37:45.356525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.456 [2024-07-15 03:37:45.356555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.456 qpair failed and we were unable to recover it.
00:34:39.456 [2024-07-15 03:37:45.366361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.456 [2024-07-15 03:37:45.366525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.456 [2024-07-15 03:37:45.366551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.456 [2024-07-15 03:37:45.366565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.456 [2024-07-15 03:37:45.366578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.456 [2024-07-15 03:37:45.366607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.456 qpair failed and we were unable to recover it.
00:34:39.456 [2024-07-15 03:37:45.376404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.456 [2024-07-15 03:37:45.376522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.456 [2024-07-15 03:37:45.376548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.456 [2024-07-15 03:37:45.376562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.456 [2024-07-15 03:37:45.376574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.456 [2024-07-15 03:37:45.376602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.456 qpair failed and we were unable to recover it.
00:34:39.456 [2024-07-15 03:37:45.386377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.456 [2024-07-15 03:37:45.386543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.456 [2024-07-15 03:37:45.386568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.456 [2024-07-15 03:37:45.386582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.456 [2024-07-15 03:37:45.386596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.456 [2024-07-15 03:37:45.386624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.456 qpair failed and we were unable to recover it.
00:34:39.456 [2024-07-15 03:37:45.396417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.456 [2024-07-15 03:37:45.396539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.456 [2024-07-15 03:37:45.396565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.456 [2024-07-15 03:37:45.396580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.456 [2024-07-15 03:37:45.396593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.456 [2024-07-15 03:37:45.396622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.456 qpair failed and we were unable to recover it.
00:34:39.456 [2024-07-15 03:37:45.406398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.456 [2024-07-15 03:37:45.406501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.456 [2024-07-15 03:37:45.406526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.456 [2024-07-15 03:37:45.406540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.456 [2024-07-15 03:37:45.406558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.456 [2024-07-15 03:37:45.406586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.456 qpair failed and we were unable to recover it.
00:34:39.456 [2024-07-15 03:37:45.416445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.456 [2024-07-15 03:37:45.416551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.456 [2024-07-15 03:37:45.416576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.456 [2024-07-15 03:37:45.416590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.456 [2024-07-15 03:37:45.416603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.456 [2024-07-15 03:37:45.416631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.456 qpair failed and we were unable to recover it.
00:34:39.456 [2024-07-15 03:37:45.426505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.456 [2024-07-15 03:37:45.426620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.456 [2024-07-15 03:37:45.426645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.456 [2024-07-15 03:37:45.426659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.456 [2024-07-15 03:37:45.426672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.456 [2024-07-15 03:37:45.426699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.456 qpair failed and we were unable to recover it.
00:34:39.456 [2024-07-15 03:37:45.436491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.456 [2024-07-15 03:37:45.436605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.456 [2024-07-15 03:37:45.436632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.456 [2024-07-15 03:37:45.436646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.456 [2024-07-15 03:37:45.436659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.456 [2024-07-15 03:37:45.436687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.456 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.446531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.446655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.446682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.446696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.446713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.446742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.456570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.456697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.456724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.456739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.456752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.456780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.466585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.466701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.466727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.466741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.466754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.466782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.476598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.476714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.476740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.476755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.476768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.476796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.486633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.486739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.486764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.486778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.486791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.486819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.496796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.496946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.496972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.496994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.497009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.497037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.506721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.506836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.506862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.506882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.506897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.506926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.516766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.516903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.516928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.516942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.516954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.516983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.526792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.526905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.526931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.526945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.526958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.526986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.536810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.536940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.536965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.536979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.536992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.537020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.546835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.546960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.546986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.546999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.547012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.547040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.556842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.556962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.556987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.557001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.557014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.457 [2024-07-15 03:37:45.557042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.457 qpair failed and we were unable to recover it.
00:34:39.457 [2024-07-15 03:37:45.566913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.457 [2024-07-15 03:37:45.567033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.457 [2024-07-15 03:37:45.567058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.457 [2024-07-15 03:37:45.567073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.457 [2024-07-15 03:37:45.567086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.458 [2024-07-15 03:37:45.567113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.458 qpair failed and we were unable to recover it.
00:34:39.458 [2024-07-15 03:37:45.576926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.458 [2024-07-15 03:37:45.577029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.458 [2024-07-15 03:37:45.577054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.458 [2024-07-15 03:37:45.577068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.458 [2024-07-15 03:37:45.577081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.458 [2024-07-15 03:37:45.577109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.458 qpair failed and we were unable to recover it.
00:34:39.458 [2024-07-15 03:37:45.586983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.458 [2024-07-15 03:37:45.587114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.458 [2024-07-15 03:37:45.587138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.458 [2024-07-15 03:37:45.587158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.458 [2024-07-15 03:37:45.587172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.458 [2024-07-15 03:37:45.587200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.458 qpair failed and we were unable to recover it.
00:34:39.458 [2024-07-15 03:37:45.597000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.458 [2024-07-15 03:37:45.597129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.458 [2024-07-15 03:37:45.597156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.458 [2024-07-15 03:37:45.597171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.458 [2024-07-15 03:37:45.597184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.458 [2024-07-15 03:37:45.597212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.458 qpair failed and we were unable to recover it.
00:34:39.717 [2024-07-15 03:37:45.607008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.717 [2024-07-15 03:37:45.607121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.717 [2024-07-15 03:37:45.607147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.717 [2024-07-15 03:37:45.607162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.717 [2024-07-15 03:37:45.607175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.717 [2024-07-15 03:37:45.607203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.717 qpair failed and we were unable to recover it.
00:34:39.717 [2024-07-15 03:37:45.617043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.717 [2024-07-15 03:37:45.617195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.717 [2024-07-15 03:37:45.617221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.717 [2024-07-15 03:37:45.617235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.717 [2024-07-15 03:37:45.617248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.717 [2024-07-15 03:37:45.617275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.717 qpair failed and we were unable to recover it.
00:34:39.717 [2024-07-15 03:37:45.627055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.717 [2024-07-15 03:37:45.627169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.717 [2024-07-15 03:37:45.627195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.717 [2024-07-15 03:37:45.627209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.717 [2024-07-15 03:37:45.627222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.717 [2024-07-15 03:37:45.627250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.717 qpair failed and we were unable to recover it.
00:34:39.717 [2024-07-15 03:37:45.637120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.717 [2024-07-15 03:37:45.637234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.717 [2024-07-15 03:37:45.637259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.717 [2024-07-15 03:37:45.637274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.717 [2024-07-15 03:37:45.637287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.717 [2024-07-15 03:37:45.637315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.717 qpair failed and we were unable to recover it.
00:34:39.717 [2024-07-15 03:37:45.647089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.717 [2024-07-15 03:37:45.647198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.717 [2024-07-15 03:37:45.647224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.717 [2024-07-15 03:37:45.647238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.717 [2024-07-15 03:37:45.647251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.717 [2024-07-15 03:37:45.647278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.717 qpair failed and we were unable to recover it.
00:34:39.717 [2024-07-15 03:37:45.657124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.717 [2024-07-15 03:37:45.657233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.717 [2024-07-15 03:37:45.657258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.717 [2024-07-15 03:37:45.657272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.717 [2024-07-15 03:37:45.657285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.717 [2024-07-15 03:37:45.657313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.717 qpair failed and we were unable to recover it.
00:34:39.717 [2024-07-15 03:37:45.667163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.717 [2024-07-15 03:37:45.667317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.717 [2024-07-15 03:37:45.667342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.717 [2024-07-15 03:37:45.667356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.717 [2024-07-15 03:37:45.667368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.717 [2024-07-15 03:37:45.667395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.717 qpair failed and we were unable to recover it.
00:34:39.717 [2024-07-15 03:37:45.677178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.717 [2024-07-15 03:37:45.677291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.717 [2024-07-15 03:37:45.677317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.717 [2024-07-15 03:37:45.677337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.717 [2024-07-15 03:37:45.677351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.717 [2024-07-15 03:37:45.677379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.717 qpair failed and we were unable to recover it.
00:34:39.717 [2024-07-15 03:37:45.687201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.687319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.687344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.687358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.687371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.687399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.697278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.697405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.697430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.697444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.697457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.697486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.707262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.707371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.707396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.707410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.707423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.707451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.717305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.717477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.717502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.717517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.717530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.717558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.727334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.727440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.727466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.727480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.727492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.727520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.737382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.737490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.737515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.737530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.737543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.737570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.747378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.747529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.747553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.747568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.747580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.747608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.757434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.757544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.757569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.757583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.757596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.757624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.767474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.767593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.767623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.767637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.767650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.767678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.777474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.777579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.777604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.777618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.777631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.777659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.787504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.787616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.787640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.787654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.787667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.787695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.797507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.797612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.797638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.797653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.797666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.797694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.718 qpair failed and we were unable to recover it.
00:34:39.718 [2024-07-15 03:37:45.807554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.718 [2024-07-15 03:37:45.807660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.718 [2024-07-15 03:37:45.807684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.718 [2024-07-15 03:37:45.807700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.718 [2024-07-15 03:37:45.807714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.718 [2024-07-15 03:37:45.807743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.719 qpair failed and we were unable to recover it.
00:34:39.719 [2024-07-15 03:37:45.817602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.719 [2024-07-15 03:37:45.817730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.719 [2024-07-15 03:37:45.817756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.719 [2024-07-15 03:37:45.817770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.719 [2024-07-15 03:37:45.817783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.719 [2024-07-15 03:37:45.817811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.719 qpair failed and we were unable to recover it.
00:34:39.719 [2024-07-15 03:37:45.827648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.719 [2024-07-15 03:37:45.827809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.719 [2024-07-15 03:37:45.827834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.719 [2024-07-15 03:37:45.827848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.719 [2024-07-15 03:37:45.827861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.719 [2024-07-15 03:37:45.827897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.719 qpair failed and we were unable to recover it.
00:34:39.719 [2024-07-15 03:37:45.837631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.719 [2024-07-15 03:37:45.837735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.719 [2024-07-15 03:37:45.837760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.719 [2024-07-15 03:37:45.837774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.719 [2024-07-15 03:37:45.837787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.719 [2024-07-15 03:37:45.837814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.719 qpair failed and we were unable to recover it.
00:34:39.719 [2024-07-15 03:37:45.847672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.719 [2024-07-15 03:37:45.847795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.719 [2024-07-15 03:37:45.847820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.719 [2024-07-15 03:37:45.847834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.719 [2024-07-15 03:37:45.847847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.719 [2024-07-15 03:37:45.847874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.719 qpair failed and we were unable to recover it.
00:34:39.719 [2024-07-15 03:37:45.857677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.719 [2024-07-15 03:37:45.857790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.719 [2024-07-15 03:37:45.857822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.719 [2024-07-15 03:37:45.857838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.719 [2024-07-15 03:37:45.857850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.719 [2024-07-15 03:37:45.857888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.719 qpair failed and we were unable to recover it.
00:34:39.978 [2024-07-15 03:37:45.867737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.978 [2024-07-15 03:37:45.867850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.978 [2024-07-15 03:37:45.867884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.978 [2024-07-15 03:37:45.867905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.978 [2024-07-15 03:37:45.867918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.978 [2024-07-15 03:37:45.867949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.978 qpair failed and we were unable to recover it.
00:34:39.978 [2024-07-15 03:37:45.877743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.978 [2024-07-15 03:37:45.877885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.978 [2024-07-15 03:37:45.877911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.978 [2024-07-15 03:37:45.877925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.978 [2024-07-15 03:37:45.877937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.978 [2024-07-15 03:37:45.877965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.978 qpair failed and we were unable to recover it.
00:34:39.978 [2024-07-15 03:37:45.887764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.978 [2024-07-15 03:37:45.887875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.978 [2024-07-15 03:37:45.887906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.978 [2024-07-15 03:37:45.887920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.978 [2024-07-15 03:37:45.887933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.978 [2024-07-15 03:37:45.887963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.978 qpair failed and we were unable to recover it.
00:34:39.978 [2024-07-15 03:37:45.897791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.978 [2024-07-15 03:37:45.897902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.978 [2024-07-15 03:37:45.897928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.978 [2024-07-15 03:37:45.897942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.978 [2024-07-15 03:37:45.897954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.978 [2024-07-15 03:37:45.897988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.978 qpair failed and we were unable to recover it.
00:34:39.978 [2024-07-15 03:37:45.907907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.978 [2024-07-15 03:37:45.908024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.978 [2024-07-15 03:37:45.908049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.978 [2024-07-15 03:37:45.908063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.978 [2024-07-15 03:37:45.908076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.978 [2024-07-15 03:37:45.908103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.978 qpair failed and we were unable to recover it.
00:34:39.978 [2024-07-15 03:37:45.917861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.978 [2024-07-15 03:37:45.918021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.978 [2024-07-15 03:37:45.918044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.978 [2024-07-15 03:37:45.918058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:45.918070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:45.918097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:45.927899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:45.928018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:45.928043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:45.928057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:45.928069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:45.928098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:45.937930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:45.938038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:45.938063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:45.938078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:45.938091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:45.938118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:45.947960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:45.948073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:45.948102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:45.948117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:45.948130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:45.948157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:45.957976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:45.958089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:45.958114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:45.958127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:45.958140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:45.958168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:45.968034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:45.968149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:45.968174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:45.968188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:45.968201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:45.968229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:45.978085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:45.978195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:45.978221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:45.978235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:45.978248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:45.978276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:45.988067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:45.988182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:45.988207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:45.988221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:45.988234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:45.988269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:45.998088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:45.998218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:45.998244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:45.998258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:45.998271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:45.998298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:46.008107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:46.008216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:46.008241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:46.008255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:46.008268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:46.008296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:46.018172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:46.018290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:46.018315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:46.018329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:46.018341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:46.018369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:46.028202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:46.028326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:46.028352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:46.028367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:46.028384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:46.028413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:46.038211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:46.038321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.979 [2024-07-15 03:37:46.038351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.979 [2024-07-15 03:37:46.038367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.979 [2024-07-15 03:37:46.038379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.979 [2024-07-15 03:37:46.038408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.979 qpair failed and we were unable to recover it.
00:34:39.979 [2024-07-15 03:37:46.048302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.979 [2024-07-15 03:37:46.048432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.980 [2024-07-15 03:37:46.048457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.980 [2024-07-15 03:37:46.048471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.980 [2024-07-15 03:37:46.048484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.980 [2024-07-15 03:37:46.048512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.980 qpair failed and we were unable to recover it.
00:34:39.980 [2024-07-15 03:37:46.058262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.980 [2024-07-15 03:37:46.058374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.980 [2024-07-15 03:37:46.058399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.980 [2024-07-15 03:37:46.058413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.980 [2024-07-15 03:37:46.058426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.980 [2024-07-15 03:37:46.058453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.980 qpair failed and we were unable to recover it.
00:34:39.980 [2024-07-15 03:37:46.068323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.980 [2024-07-15 03:37:46.068447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.980 [2024-07-15 03:37:46.068472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.980 [2024-07-15 03:37:46.068487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.980 [2024-07-15 03:37:46.068501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.980 [2024-07-15 03:37:46.068528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.980 qpair failed and we were unable to recover it.
00:34:39.980 [2024-07-15 03:37:46.078351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.980 [2024-07-15 03:37:46.078461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.980 [2024-07-15 03:37:46.078486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.980 [2024-07-15 03:37:46.078501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.980 [2024-07-15 03:37:46.078519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.980 [2024-07-15 03:37:46.078548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.980 qpair failed and we were unable to recover it.
00:34:39.980 [2024-07-15 03:37:46.088356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.980 [2024-07-15 03:37:46.088464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.980 [2024-07-15 03:37:46.088490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.980 [2024-07-15 03:37:46.088504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.980 [2024-07-15 03:37:46.088517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.980 [2024-07-15 03:37:46.088545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.980 qpair failed and we were unable to recover it.
00:34:39.980 [2024-07-15 03:37:46.098385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.980 [2024-07-15 03:37:46.098485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.980 [2024-07-15 03:37:46.098510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.980 [2024-07-15 03:37:46.098525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.980 [2024-07-15 03:37:46.098538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.980 [2024-07-15 03:37:46.098565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.980 qpair failed and we were unable to recover it.
00:34:39.980 [2024-07-15 03:37:46.108411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.980 [2024-07-15 03:37:46.108520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.980 [2024-07-15 03:37:46.108545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.980 [2024-07-15 03:37:46.108559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.980 [2024-07-15 03:37:46.108572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.980 [2024-07-15 03:37:46.108599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.980 qpair failed and we were unable to recover it.
00:34:39.980 [2024-07-15 03:37:46.118487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.980 [2024-07-15 03:37:46.118604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.980 [2024-07-15 03:37:46.118630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.980 [2024-07-15 03:37:46.118645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.980 [2024-07-15 03:37:46.118658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:39.980 [2024-07-15 03:37:46.118687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.980 qpair failed and we were unable to recover it.
00:34:40.239 [2024-07-15 03:37:46.128508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.239 [2024-07-15 03:37:46.128634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.239 [2024-07-15 03:37:46.128662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.239 [2024-07-15 03:37:46.128677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.239 [2024-07-15 03:37:46.128690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.239 [2024-07-15 03:37:46.128719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.239 qpair failed and we were unable to recover it.
00:34:40.239 [2024-07-15 03:37:46.138503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.239 [2024-07-15 03:37:46.138610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.239 [2024-07-15 03:37:46.138637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.239 [2024-07-15 03:37:46.138652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.239 [2024-07-15 03:37:46.138664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.239 [2024-07-15 03:37:46.138692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.239 qpair failed and we were unable to recover it.
00:34:40.239 [2024-07-15 03:37:46.148523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.239 [2024-07-15 03:37:46.148639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.239 [2024-07-15 03:37:46.148664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.239 [2024-07-15 03:37:46.148678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.239 [2024-07-15 03:37:46.148691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.239 [2024-07-15 03:37:46.148719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.239 qpair failed and we were unable to recover it.
00:34:40.239 [2024-07-15 03:37:46.158549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.239 [2024-07-15 03:37:46.158665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.239 [2024-07-15 03:37:46.158690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.239 [2024-07-15 03:37:46.158704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.239 [2024-07-15 03:37:46.158717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.239 [2024-07-15 03:37:46.158746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.239 qpair failed and we were unable to recover it.
00:34:40.239 [2024-07-15 03:37:46.168568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.239 [2024-07-15 03:37:46.168681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.239 [2024-07-15 03:37:46.168706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.239 [2024-07-15 03:37:46.168720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.239 [2024-07-15 03:37:46.168738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.239 [2024-07-15 03:37:46.168766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.239 qpair failed and we were unable to recover it.
00:34:40.239 [2024-07-15 03:37:46.178598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.239 [2024-07-15 03:37:46.178703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.239 [2024-07-15 03:37:46.178729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.239 [2024-07-15 03:37:46.178743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.239 [2024-07-15 03:37:46.178756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.239 [2024-07-15 03:37:46.178784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.239 qpair failed and we were unable to recover it.
00:34:40.239 [2024-07-15 03:37:46.188693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.239 [2024-07-15 03:37:46.188825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.239 [2024-07-15 03:37:46.188850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.239 [2024-07-15 03:37:46.188865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.239 [2024-07-15 03:37:46.188885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.239 [2024-07-15 03:37:46.188917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.239 qpair failed and we were unable to recover it.
00:34:40.239 [2024-07-15 03:37:46.198670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.239 [2024-07-15 03:37:46.198780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.239 [2024-07-15 03:37:46.198805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.239 [2024-07-15 03:37:46.198820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.198832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.198860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.208734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.208896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.208921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.208935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.208949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.208977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.218769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.218925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.218951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.218965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.218979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.219007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.228782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.228949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.228974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.228988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.229000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.229028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.238787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.238904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.238929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.238943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.238956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.238985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.248822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.248934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.248969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.248984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.248996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.249025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.258896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.259012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.259037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.259051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.259072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.259101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.268917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.269061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.269085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.269100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.269113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.269140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.278912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.279029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.279054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.279068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.279081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.279110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.288972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.289091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.289116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.289131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.289144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.289171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.298960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.299060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.299084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.299098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.299111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.299140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.309022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.309135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.309160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.309174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.309187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.309214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.319025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.319138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.319163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.240 [2024-07-15 03:37:46.319177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.240 [2024-07-15 03:37:46.319190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.240 [2024-07-15 03:37:46.319218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.240 qpair failed and we were unable to recover it.
00:34:40.240 [2024-07-15 03:37:46.329086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.240 [2024-07-15 03:37:46.329195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.240 [2024-07-15 03:37:46.329220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.241 [2024-07-15 03:37:46.329234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.241 [2024-07-15 03:37:46.329247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.241 [2024-07-15 03:37:46.329274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.241 qpair failed and we were unable to recover it.
00:34:40.241 [2024-07-15 03:37:46.339107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.241 [2024-07-15 03:37:46.339214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.241 [2024-07-15 03:37:46.339238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.241 [2024-07-15 03:37:46.339252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.241 [2024-07-15 03:37:46.339265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20
00:34:40.241 [2024-07-15 03:37:46.339293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.241 qpair failed and we were unable to recover it.
00:34:40.241 [2024-07-15 03:37:46.349194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.241 [2024-07-15 03:37:46.349344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.241 [2024-07-15 03:37:46.349369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.241 [2024-07-15 03:37:46.349389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.241 [2024-07-15 03:37:46.349403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.241 [2024-07-15 03:37:46.349430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.241 qpair failed and we were unable to recover it. 00:34:40.241 [2024-07-15 03:37:46.359149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.241 [2024-07-15 03:37:46.359255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.241 [2024-07-15 03:37:46.359280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.241 [2024-07-15 03:37:46.359294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.241 [2024-07-15 03:37:46.359308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.241 [2024-07-15 03:37:46.359337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.241 qpair failed and we were unable to recover it. 00:34:40.241 [2024-07-15 03:37:46.369169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.241 [2024-07-15 03:37:46.369279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.241 [2024-07-15 03:37:46.369304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.241 [2024-07-15 03:37:46.369318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.241 [2024-07-15 03:37:46.369331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.241 [2024-07-15 03:37:46.369359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.241 qpair failed and we were unable to recover it. 
00:34:40.241 [2024-07-15 03:37:46.379213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.241 [2024-07-15 03:37:46.379338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.241 [2024-07-15 03:37:46.379372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.241 [2024-07-15 03:37:46.379396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.241 [2024-07-15 03:37:46.379418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.241 [2024-07-15 03:37:46.379453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.241 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-15 03:37:46.389226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.500 [2024-07-15 03:37:46.389343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.500 [2024-07-15 03:37:46.389370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.500 [2024-07-15 03:37:46.389385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.500 [2024-07-15 03:37:46.389398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.500 [2024-07-15 03:37:46.389426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-15 03:37:46.399248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.500 [2024-07-15 03:37:46.399375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.500 [2024-07-15 03:37:46.399400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.500 [2024-07-15 03:37:46.399415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.500 [2024-07-15 03:37:46.399428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.500 [2024-07-15 03:37:46.399456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.500 qpair failed and we were unable to recover it. 
00:34:40.500 [2024-07-15 03:37:46.409293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.500 [2024-07-15 03:37:46.409402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.500 [2024-07-15 03:37:46.409428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.500 [2024-07-15 03:37:46.409442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.500 [2024-07-15 03:37:46.409454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.500 [2024-07-15 03:37:46.409485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-15 03:37:46.419307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.500 [2024-07-15 03:37:46.419416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.500 [2024-07-15 03:37:46.419441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.500 [2024-07-15 03:37:46.419455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.500 [2024-07-15 03:37:46.419468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.500 [2024-07-15 03:37:46.419496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-15 03:37:46.429403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.500 [2024-07-15 03:37:46.429544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.500 [2024-07-15 03:37:46.429569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.500 [2024-07-15 03:37:46.429583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.500 [2024-07-15 03:37:46.429595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.500 [2024-07-15 03:37:46.429622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.500 qpair failed and we were unable to recover it. 
00:34:40.500 [2024-07-15 03:37:46.439404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.500 [2024-07-15 03:37:46.439514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.500 [2024-07-15 03:37:46.439540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.500 [2024-07-15 03:37:46.439560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.500 [2024-07-15 03:37:46.439573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.500 [2024-07-15 03:37:46.439602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-15 03:37:46.449417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.500 [2024-07-15 03:37:46.449526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.500 [2024-07-15 03:37:46.449551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.500 [2024-07-15 03:37:46.449565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.500 [2024-07-15 03:37:46.449578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.500 [2024-07-15 03:37:46.449605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.500 qpair failed and we were unable to recover it. 00:34:40.500 [2024-07-15 03:37:46.459444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.459569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.459594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.459610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.459623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.459652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 
00:34:40.501 [2024-07-15 03:37:46.469499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.469612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.469637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.469651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.469664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.469691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-15 03:37:46.479504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.479609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.479634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.479648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.479661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.479689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-15 03:37:46.489514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.489624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.489650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.489664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.489676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.489703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 
00:34:40.501 [2024-07-15 03:37:46.499579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.499728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.499753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.499767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.499779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.499809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-15 03:37:46.509580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.509699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.509724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.509739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.509751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.509779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-15 03:37:46.519653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.519804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.519830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.519844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.519856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.519891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 
00:34:40.501 [2024-07-15 03:37:46.529633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.529751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.529781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.529797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.529810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.529838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-15 03:37:46.539641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.539786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.539812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.539825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.539838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.539866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-15 03:37:46.549727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.549840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.549865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.549891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.549906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.549936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 
00:34:40.501 [2024-07-15 03:37:46.559731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.559874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.559907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.559921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.559934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.559962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-15 03:37:46.569755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.569863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.569905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.569920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.569933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.501 [2024-07-15 03:37:46.569963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.501 qpair failed and we were unable to recover it. 00:34:40.501 [2024-07-15 03:37:46.579837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.501 [2024-07-15 03:37:46.579982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.501 [2024-07-15 03:37:46.580009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.501 [2024-07-15 03:37:46.580023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.501 [2024-07-15 03:37:46.580036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.502 [2024-07-15 03:37:46.580065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.502 qpair failed and we were unable to recover it. 
00:34:40.502 [2024-07-15 03:37:46.589809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.502 [2024-07-15 03:37:46.589932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.502 [2024-07-15 03:37:46.589958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.502 [2024-07-15 03:37:46.589972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.502 [2024-07-15 03:37:46.589985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.502 [2024-07-15 03:37:46.590013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.502 qpair failed and we were unable to recover it. 00:34:40.502 [2024-07-15 03:37:46.599821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.502 [2024-07-15 03:37:46.599950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.502 [2024-07-15 03:37:46.599976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.502 [2024-07-15 03:37:46.599990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.502 [2024-07-15 03:37:46.600003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.502 [2024-07-15 03:37:46.600031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.502 qpair failed and we were unable to recover it. 00:34:40.502 [2024-07-15 03:37:46.609839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.502 [2024-07-15 03:37:46.609962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.502 [2024-07-15 03:37:46.609987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.502 [2024-07-15 03:37:46.610001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.502 [2024-07-15 03:37:46.610014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.502 [2024-07-15 03:37:46.610042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.502 qpair failed and we were unable to recover it. 
00:34:40.502 [2024-07-15 03:37:46.619906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.502 [2024-07-15 03:37:46.620021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.502 [2024-07-15 03:37:46.620051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.502 [2024-07-15 03:37:46.620067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.502 [2024-07-15 03:37:46.620080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.502 [2024-07-15 03:37:46.620108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.502 qpair failed and we were unable to recover it. 00:34:40.502 [2024-07-15 03:37:46.629931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.502 [2024-07-15 03:37:46.630057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.502 [2024-07-15 03:37:46.630081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.502 [2024-07-15 03:37:46.630095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.502 [2024-07-15 03:37:46.630107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.502 [2024-07-15 03:37:46.630135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.502 qpair failed and we were unable to recover it. 00:34:40.502 [2024-07-15 03:37:46.639970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.502 [2024-07-15 03:37:46.640082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.502 [2024-07-15 03:37:46.640116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.502 [2024-07-15 03:37:46.640140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.502 [2024-07-15 03:37:46.640154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.502 [2024-07-15 03:37:46.640184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.502 qpair failed and we were unable to recover it. 
00:34:40.761 [2024-07-15 03:37:46.650066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.761 [2024-07-15 03:37:46.650198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.761 [2024-07-15 03:37:46.650234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.761 [2024-07-15 03:37:46.650249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.761 [2024-07-15 03:37:46.650262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.761 [2024-07-15 03:37:46.650290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.761 qpair failed and we were unable to recover it. 00:34:40.761 [2024-07-15 03:37:46.660029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.761 [2024-07-15 03:37:46.660140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.761 [2024-07-15 03:37:46.660166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.761 [2024-07-15 03:37:46.660180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.761 [2024-07-15 03:37:46.660192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.761 [2024-07-15 03:37:46.660226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.761 qpair failed and we were unable to recover it. 00:34:40.761 [2024-07-15 03:37:46.670052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.761 [2024-07-15 03:37:46.670172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.761 [2024-07-15 03:37:46.670198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.761 [2024-07-15 03:37:46.670212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.761 [2024-07-15 03:37:46.670225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.761 [2024-07-15 03:37:46.670253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.761 qpair failed and we were unable to recover it. 
00:34:40.761 [2024-07-15 03:37:46.680064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.761 [2024-07-15 03:37:46.680177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.761 [2024-07-15 03:37:46.680203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.761 [2024-07-15 03:37:46.680217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.761 [2024-07-15 03:37:46.680229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.761 [2024-07-15 03:37:46.680257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.761 qpair failed and we were unable to recover it. 00:34:40.761 [2024-07-15 03:37:46.690105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.761 [2024-07-15 03:37:46.690214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.761 [2024-07-15 03:37:46.690240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.761 [2024-07-15 03:37:46.690254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.761 [2024-07-15 03:37:46.690267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.761 [2024-07-15 03:37:46.690296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.761 qpair failed and we were unable to recover it. 00:34:40.761 [2024-07-15 03:37:46.700119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.761 [2024-07-15 03:37:46.700228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.761 [2024-07-15 03:37:46.700253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.761 [2024-07-15 03:37:46.700267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.761 [2024-07-15 03:37:46.700280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.761 [2024-07-15 03:37:46.700308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.761 qpair failed and we were unable to recover it. 
00:34:40.761 [2024-07-15 03:37:46.710205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.710320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.710350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.710365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.710378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.710406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 00:34:40.762 [2024-07-15 03:37:46.720161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.720267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.720292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.720306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.720319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.720346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 00:34:40.762 [2024-07-15 03:37:46.730180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.730291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.730316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.730330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.730343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.730371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 
00:34:40.762 [2024-07-15 03:37:46.740212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.740337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.740364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.740378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.740395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.740425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 00:34:40.762 [2024-07-15 03:37:46.750264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.750378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.750403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.750417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.750430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.750464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 00:34:40.762 [2024-07-15 03:37:46.760292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.760410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.760435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.760449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.760462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.760490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 
00:34:40.762 [2024-07-15 03:37:46.770323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.770436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.770463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.770478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.770495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.770524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 00:34:40.762 [2024-07-15 03:37:46.780355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.780488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.780514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.780528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.780541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.780568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 00:34:40.762 [2024-07-15 03:37:46.790415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.790527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.790552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.790566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.790579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.790607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 
00:34:40.762 [2024-07-15 03:37:46.800474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.800584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.800614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.800628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.800642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.800669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 00:34:40.762 [2024-07-15 03:37:46.810449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.810602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.810627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.810641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.810653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.810681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 00:34:40.762 [2024-07-15 03:37:46.820458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.820564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.820589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.820603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.820616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.762 [2024-07-15 03:37:46.820643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.762 qpair failed and we were unable to recover it. 
00:34:40.762 [2024-07-15 03:37:46.830507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.762 [2024-07-15 03:37:46.830633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.762 [2024-07-15 03:37:46.830658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.762 [2024-07-15 03:37:46.830673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.762 [2024-07-15 03:37:46.830686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.763 [2024-07-15 03:37:46.830713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.763 qpair failed and we were unable to recover it. 00:34:40.763 [2024-07-15 03:37:46.840490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.763 [2024-07-15 03:37:46.840597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.763 [2024-07-15 03:37:46.840621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.763 [2024-07-15 03:37:46.840635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.763 [2024-07-15 03:37:46.840653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.763 [2024-07-15 03:37:46.840681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.763 qpair failed and we were unable to recover it. 00:34:40.763 [2024-07-15 03:37:46.850547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.763 [2024-07-15 03:37:46.850651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.763 [2024-07-15 03:37:46.850677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.763 [2024-07-15 03:37:46.850691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.763 [2024-07-15 03:37:46.850704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.763 [2024-07-15 03:37:46.850733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.763 qpair failed and we were unable to recover it. 
00:34:40.763 [2024-07-15 03:37:46.860582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.763 [2024-07-15 03:37:46.860690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.763 [2024-07-15 03:37:46.860715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.763 [2024-07-15 03:37:46.860729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.763 [2024-07-15 03:37:46.860741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.763 [2024-07-15 03:37:46.860769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.763 qpair failed and we were unable to recover it. 00:34:40.763 [2024-07-15 03:37:46.870651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.763 [2024-07-15 03:37:46.870765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.763 [2024-07-15 03:37:46.870789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.763 [2024-07-15 03:37:46.870802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.763 [2024-07-15 03:37:46.870815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.763 [2024-07-15 03:37:46.870843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.763 qpair failed and we were unable to recover it. 00:34:40.763 [2024-07-15 03:37:46.880621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.763 [2024-07-15 03:37:46.880749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.763 [2024-07-15 03:37:46.880773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.763 [2024-07-15 03:37:46.880787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.763 [2024-07-15 03:37:46.880800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:40.763 [2024-07-15 03:37:46.880827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.763 qpair failed and we were unable to recover it. 
00:34:41.546 [2024-07-15 03:37:47.522447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.546 [2024-07-15 03:37:47.522570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.546 [2024-07-15 03:37:47.522595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.546 [2024-07-15 03:37:47.522609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.546 [2024-07-15 03:37:47.522623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.546 [2024-07-15 03:37:47.522650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.546 qpair failed and we were unable to recover it. 00:34:41.546 [2024-07-15 03:37:47.532481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.546 [2024-07-15 03:37:47.532630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.546 [2024-07-15 03:37:47.532654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.546 [2024-07-15 03:37:47.532668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.546 [2024-07-15 03:37:47.532681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.546 [2024-07-15 03:37:47.532709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.546 qpair failed and we were unable to recover it. 00:34:41.546 [2024-07-15 03:37:47.542470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.546 [2024-07-15 03:37:47.542573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.546 [2024-07-15 03:37:47.542599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.546 [2024-07-15 03:37:47.542613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.546 [2024-07-15 03:37:47.542626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.546 [2024-07-15 03:37:47.542654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.546 qpair failed and we were unable to recover it. 
00:34:41.546 [2024-07-15 03:37:47.552514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.546 [2024-07-15 03:37:47.552626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.546 [2024-07-15 03:37:47.552652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.546 [2024-07-15 03:37:47.552666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.546 [2024-07-15 03:37:47.552679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.546 [2024-07-15 03:37:47.552706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.546 qpair failed and we were unable to recover it. 00:34:41.546 [2024-07-15 03:37:47.562564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.546 [2024-07-15 03:37:47.562680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.546 [2024-07-15 03:37:47.562710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.546 [2024-07-15 03:37:47.562724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.546 [2024-07-15 03:37:47.562737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.546 [2024-07-15 03:37:47.562765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.546 qpair failed and we were unable to recover it. 00:34:41.546 [2024-07-15 03:37:47.572549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.546 [2024-07-15 03:37:47.572659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.546 [2024-07-15 03:37:47.572684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.546 [2024-07-15 03:37:47.572699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.546 [2024-07-15 03:37:47.572713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.546 [2024-07-15 03:37:47.572741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.546 qpair failed and we were unable to recover it. 
00:34:41.546 [2024-07-15 03:37:47.582587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.546 [2024-07-15 03:37:47.582726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.546 [2024-07-15 03:37:47.582752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.546 [2024-07-15 03:37:47.582766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.546 [2024-07-15 03:37:47.582779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.547 [2024-07-15 03:37:47.582806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.547 qpair failed and we were unable to recover it. 00:34:41.547 [2024-07-15 03:37:47.592624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.547 [2024-07-15 03:37:47.592758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.547 [2024-07-15 03:37:47.592785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.547 [2024-07-15 03:37:47.592799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.547 [2024-07-15 03:37:47.592811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.547 [2024-07-15 03:37:47.592839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.547 qpair failed and we were unable to recover it. 00:34:41.547 [2024-07-15 03:37:47.602649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.547 [2024-07-15 03:37:47.602762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.547 [2024-07-15 03:37:47.602787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.547 [2024-07-15 03:37:47.602801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.547 [2024-07-15 03:37:47.602814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.547 [2024-07-15 03:37:47.602849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.547 qpair failed and we were unable to recover it. 
00:34:41.547 [2024-07-15 03:37:47.612696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.547 [2024-07-15 03:37:47.612828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.547 [2024-07-15 03:37:47.612853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.547 [2024-07-15 03:37:47.612867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.547 [2024-07-15 03:37:47.612887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.547 [2024-07-15 03:37:47.612924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.547 qpair failed and we were unable to recover it. 00:34:41.547 [2024-07-15 03:37:47.622714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.547 [2024-07-15 03:37:47.622826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.547 [2024-07-15 03:37:47.622851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.547 [2024-07-15 03:37:47.622865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.547 [2024-07-15 03:37:47.622886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.547 [2024-07-15 03:37:47.622917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.547 qpair failed and we were unable to recover it. 00:34:41.547 [2024-07-15 03:37:47.632788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.547 [2024-07-15 03:37:47.632951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.547 [2024-07-15 03:37:47.632975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.547 [2024-07-15 03:37:47.632989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.547 [2024-07-15 03:37:47.633004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.547 [2024-07-15 03:37:47.633032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.547 qpair failed and we were unable to recover it. 
00:34:41.547 [2024-07-15 03:37:47.642780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.547 [2024-07-15 03:37:47.642897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.547 [2024-07-15 03:37:47.642923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.547 [2024-07-15 03:37:47.642937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.547 [2024-07-15 03:37:47.642950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.547 [2024-07-15 03:37:47.642977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.547 qpair failed and we were unable to recover it. 00:34:41.547 [2024-07-15 03:37:47.652810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.547 [2024-07-15 03:37:47.652932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.547 [2024-07-15 03:37:47.652962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.547 [2024-07-15 03:37:47.652977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.547 [2024-07-15 03:37:47.652990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.547 [2024-07-15 03:37:47.653018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.547 qpair failed and we were unable to recover it. 00:34:41.547 [2024-07-15 03:37:47.662824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.547 [2024-07-15 03:37:47.662945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.547 [2024-07-15 03:37:47.662971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.547 [2024-07-15 03:37:47.662985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.547 [2024-07-15 03:37:47.662998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.547 [2024-07-15 03:37:47.663028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.547 qpair failed and we were unable to recover it. 
00:34:41.547 [2024-07-15 03:37:47.672894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.547 [2024-07-15 03:37:47.673016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.547 [2024-07-15 03:37:47.673040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.547 [2024-07-15 03:37:47.673054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.547 [2024-07-15 03:37:47.673067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.547 [2024-07-15 03:37:47.673095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.547 qpair failed and we were unable to recover it. 00:34:41.547 [2024-07-15 03:37:47.682956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.547 [2024-07-15 03:37:47.683072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.547 [2024-07-15 03:37:47.683099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.547 [2024-07-15 03:37:47.683114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.547 [2024-07-15 03:37:47.683127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.547 [2024-07-15 03:37:47.683157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.547 qpair failed and we were unable to recover it. 00:34:41.806 [2024-07-15 03:37:47.692937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.806 [2024-07-15 03:37:47.693101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.806 [2024-07-15 03:37:47.693128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.806 [2024-07-15 03:37:47.693143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.806 [2024-07-15 03:37:47.693164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.806 [2024-07-15 03:37:47.693195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.806 qpair failed and we were unable to recover it. 
00:34:41.806 [2024-07-15 03:37:47.702949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.806 [2024-07-15 03:37:47.703061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.806 [2024-07-15 03:37:47.703087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.806 [2024-07-15 03:37:47.703101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.806 [2024-07-15 03:37:47.703114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.806 [2024-07-15 03:37:47.703142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.806 qpair failed and we were unable to recover it. 00:34:41.806 [2024-07-15 03:37:47.712962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.806 [2024-07-15 03:37:47.713079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.806 [2024-07-15 03:37:47.713104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.806 [2024-07-15 03:37:47.713119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.806 [2024-07-15 03:37:47.713132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.806 [2024-07-15 03:37:47.713166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.806 qpair failed and we were unable to recover it. 00:34:41.806 [2024-07-15 03:37:47.722985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.806 [2024-07-15 03:37:47.723102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.806 [2024-07-15 03:37:47.723127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.806 [2024-07-15 03:37:47.723141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.806 [2024-07-15 03:37:47.723154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.806 [2024-07-15 03:37:47.723182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.806 qpair failed and we were unable to recover it. 
00:34:41.806 [2024-07-15 03:37:47.733103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.806 [2024-07-15 03:37:47.733241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.733268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.733282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.733295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.733324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-15 03:37:47.743070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.743195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.743221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.743235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.743248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.743276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-15 03:37:47.753111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.753239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.753264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.753279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.753292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.753319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 
00:34:41.807 [2024-07-15 03:37:47.763104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.763219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.763244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.763258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.763271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.763299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-15 03:37:47.773160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.773291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.773316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.773330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.773343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.773370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-15 03:37:47.783171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.783283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.783309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.783323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.783341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.783369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 
00:34:41.807 [2024-07-15 03:37:47.793183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.793295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.793320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.793334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.793347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.793374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-15 03:37:47.803198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.803307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.803332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.803346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.803359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.803386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-15 03:37:47.813218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.813327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.813352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.813366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.813379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.813406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 
00:34:41.807 [2024-07-15 03:37:47.823266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.823387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.823412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.823426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.823439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.823466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-15 03:37:47.833320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.833441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.833467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.833480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.833493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.833521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.807 [2024-07-15 03:37:47.843310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.843420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.843445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.843459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.843471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.843499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 
00:34:41.807 [2024-07-15 03:37:47.853324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.807 [2024-07-15 03:37:47.853425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.807 [2024-07-15 03:37:47.853450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.807 [2024-07-15 03:37:47.853464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.807 [2024-07-15 03:37:47.853477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.807 [2024-07-15 03:37:47.853505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.807 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-15 03:37:47.863447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.808 [2024-07-15 03:37:47.863556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.808 [2024-07-15 03:37:47.863582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.808 [2024-07-15 03:37:47.863596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.808 [2024-07-15 03:37:47.863609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.808 [2024-07-15 03:37:47.863636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-15 03:37:47.873410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.808 [2024-07-15 03:37:47.873518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.808 [2024-07-15 03:37:47.873542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.808 [2024-07-15 03:37:47.873562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.808 [2024-07-15 03:37:47.873576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.808 [2024-07-15 03:37:47.873603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.808 qpair failed and we were unable to recover it. 
00:34:41.808 [2024-07-15 03:37:47.883420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.808 [2024-07-15 03:37:47.883524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.808 [2024-07-15 03:37:47.883549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.808 [2024-07-15 03:37:47.883563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.808 [2024-07-15 03:37:47.883575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.808 [2024-07-15 03:37:47.883602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-15 03:37:47.893466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.808 [2024-07-15 03:37:47.893574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.808 [2024-07-15 03:37:47.893599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.808 [2024-07-15 03:37:47.893613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.808 [2024-07-15 03:37:47.893626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.808 [2024-07-15 03:37:47.893654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-15 03:37:47.903505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.808 [2024-07-15 03:37:47.903631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.808 [2024-07-15 03:37:47.903656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.808 [2024-07-15 03:37:47.903670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.808 [2024-07-15 03:37:47.903683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.808 [2024-07-15 03:37:47.903710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.808 qpair failed and we were unable to recover it. 
00:34:41.808 [2024-07-15 03:37:47.913555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.808 [2024-07-15 03:37:47.913673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.808 [2024-07-15 03:37:47.913698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.808 [2024-07-15 03:37:47.913712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.808 [2024-07-15 03:37:47.913725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.808 [2024-07-15 03:37:47.913752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-15 03:37:47.923563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.808 [2024-07-15 03:37:47.923687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.808 [2024-07-15 03:37:47.923711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.808 [2024-07-15 03:37:47.923724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.808 [2024-07-15 03:37:47.923736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.808 [2024-07-15 03:37:47.923763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.808 qpair failed and we were unable to recover it. 00:34:41.808 [2024-07-15 03:37:47.933605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.808 [2024-07-15 03:37:47.933718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.808 [2024-07-15 03:37:47.933743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.808 [2024-07-15 03:37:47.933757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.808 [2024-07-15 03:37:47.933770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.808 [2024-07-15 03:37:47.933797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.808 qpair failed and we were unable to recover it. 
00:34:41.808 [2024-07-15 03:37:47.943609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.808 [2024-07-15 03:37:47.943716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.808 [2024-07-15 03:37:47.943740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.808 [2024-07-15 03:37:47.943754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.808 [2024-07-15 03:37:47.943767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:41.808 [2024-07-15 03:37:47.943795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.808 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-15 03:37:47.953712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.067 [2024-07-15 03:37:47.953836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.067 [2024-07-15 03:37:47.953861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.067 [2024-07-15 03:37:47.953882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.067 [2024-07-15 03:37:47.953896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.067 [2024-07-15 03:37:47.953925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-15 03:37:47.963719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.067 [2024-07-15 03:37:47.963898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.067 [2024-07-15 03:37:47.963924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.067 [2024-07-15 03:37:47.963950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.067 [2024-07-15 03:37:47.963965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.067 [2024-07-15 03:37:47.963994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.067 qpair failed and we were unable to recover it. 
00:34:42.067 [2024-07-15 03:37:47.973700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.067 [2024-07-15 03:37:47.973804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.067 [2024-07-15 03:37:47.973830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.067 [2024-07-15 03:37:47.973843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.067 [2024-07-15 03:37:47.973856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.067 [2024-07-15 03:37:47.973892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-15 03:37:47.983740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.067 [2024-07-15 03:37:47.983858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.067 [2024-07-15 03:37:47.983890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.067 [2024-07-15 03:37:47.983906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.067 [2024-07-15 03:37:47.983919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.067 [2024-07-15 03:37:47.983947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-15 03:37:47.993797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.068 [2024-07-15 03:37:47.993916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.068 [2024-07-15 03:37:47.993941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.068 [2024-07-15 03:37:47.993955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.068 [2024-07-15 03:37:47.993967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.068 [2024-07-15 03:37:47.993995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.068 qpair failed and we were unable to recover it. 
00:34:42.068 [2024-07-15 03:37:48.003786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.068 [2024-07-15 03:37:48.003915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.068 [2024-07-15 03:37:48.003941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.068 [2024-07-15 03:37:48.003955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.068 [2024-07-15 03:37:48.003968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.068 [2024-07-15 03:37:48.003995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-15 03:37:48.013823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.068 [2024-07-15 03:37:48.013937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.068 [2024-07-15 03:37:48.013962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.068 [2024-07-15 03:37:48.013976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.068 [2024-07-15 03:37:48.013988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.068 [2024-07-15 03:37:48.014016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-15 03:37:48.023854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.068 [2024-07-15 03:37:48.023969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.068 [2024-07-15 03:37:48.023994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.068 [2024-07-15 03:37:48.024009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.068 [2024-07-15 03:37:48.024022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.068 [2024-07-15 03:37:48.024050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.068 qpair failed and we were unable to recover it. 
00:34:42.068 [... identical seven-message CONNECT retry cycle repeats for every subsequent attempt, timestamps advancing from 03:37:48.033 through 03:37:48.655, console clock reaching 00:34:42.591 ...]
00:34:42.591 [2024-07-15 03:37:48.665665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.591 [2024-07-15 03:37:48.665795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.591 [2024-07-15 03:37:48.665820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.591 [2024-07-15 03:37:48.665834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.591 [2024-07-15 03:37:48.665848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.591 [2024-07-15 03:37:48.665882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.591 qpair failed and we were unable to recover it. 00:34:42.591 [2024-07-15 03:37:48.675690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.591 [2024-07-15 03:37:48.675805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.591 [2024-07-15 03:37:48.675831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.591 [2024-07-15 03:37:48.675845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.591 [2024-07-15 03:37:48.675859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.591 [2024-07-15 03:37:48.675899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.591 qpair failed and we were unable to recover it. 00:34:42.591 [2024-07-15 03:37:48.685714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.591 [2024-07-15 03:37:48.685824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.591 [2024-07-15 03:37:48.685849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.591 [2024-07-15 03:37:48.685873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.591 [2024-07-15 03:37:48.685895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.591 [2024-07-15 03:37:48.685923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.591 qpair failed and we were unable to recover it. 
00:34:42.592 [2024-07-15 03:37:48.695739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.592 [2024-07-15 03:37:48.695847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.592 [2024-07-15 03:37:48.695872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.592 [2024-07-15 03:37:48.695893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.592 [2024-07-15 03:37:48.695907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.592 [2024-07-15 03:37:48.695936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.592 qpair failed and we were unable to recover it. 00:34:42.592 [2024-07-15 03:37:48.705809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.592 [2024-07-15 03:37:48.705935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.592 [2024-07-15 03:37:48.705961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.592 [2024-07-15 03:37:48.705975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.592 [2024-07-15 03:37:48.705988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.592 [2024-07-15 03:37:48.706015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.592 qpair failed and we were unable to recover it. 00:34:42.592 [2024-07-15 03:37:48.715794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.592 [2024-07-15 03:37:48.715912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.592 [2024-07-15 03:37:48.715937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.592 [2024-07-15 03:37:48.715951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.592 [2024-07-15 03:37:48.715964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.592 [2024-07-15 03:37:48.715994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.592 qpair failed and we were unable to recover it. 
00:34:42.592 [2024-07-15 03:37:48.725824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.592 [2024-07-15 03:37:48.725950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.592 [2024-07-15 03:37:48.725975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.592 [2024-07-15 03:37:48.725996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.592 [2024-07-15 03:37:48.726010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.592 [2024-07-15 03:37:48.726039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.592 qpair failed and we were unable to recover it. 00:34:42.851 [2024-07-15 03:37:48.735849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.851 [2024-07-15 03:37:48.735984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.851 [2024-07-15 03:37:48.736010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.851 [2024-07-15 03:37:48.736024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.851 [2024-07-15 03:37:48.736037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.851 [2024-07-15 03:37:48.736065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.851 qpair failed and we were unable to recover it. 00:34:42.851 [2024-07-15 03:37:48.745893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.851 [2024-07-15 03:37:48.746002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.851 [2024-07-15 03:37:48.746027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.851 [2024-07-15 03:37:48.746042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.851 [2024-07-15 03:37:48.746055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.851 [2024-07-15 03:37:48.746083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.851 qpair failed and we were unable to recover it. 
00:34:42.851 [2024-07-15 03:37:48.755932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.851 [2024-07-15 03:37:48.756045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.851 [2024-07-15 03:37:48.756069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.851 [2024-07-15 03:37:48.756083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.851 [2024-07-15 03:37:48.756096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.851 [2024-07-15 03:37:48.756124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.851 qpair failed and we were unable to recover it. 00:34:42.851 [2024-07-15 03:37:48.765963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.851 [2024-07-15 03:37:48.766078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.851 [2024-07-15 03:37:48.766103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.851 [2024-07-15 03:37:48.766117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.851 [2024-07-15 03:37:48.766130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.851 [2024-07-15 03:37:48.766158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.851 qpair failed and we were unable to recover it. 00:34:42.851 [2024-07-15 03:37:48.775990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.851 [2024-07-15 03:37:48.776110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.851 [2024-07-15 03:37:48.776135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.851 [2024-07-15 03:37:48.776149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.851 [2024-07-15 03:37:48.776162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.851 [2024-07-15 03:37:48.776189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.851 qpair failed and we were unable to recover it. 
00:34:42.851 [2024-07-15 03:37:48.786044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.851 [2024-07-15 03:37:48.786167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.851 [2024-07-15 03:37:48.786196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.851 [2024-07-15 03:37:48.786210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.851 [2024-07-15 03:37:48.786223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.851 [2024-07-15 03:37:48.786251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.851 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-15 03:37:48.796079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.796262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.796290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.796307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.796319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.796349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-15 03:37:48.806062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.806174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.806201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.806217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.806230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.806258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 
00:34:42.852 [2024-07-15 03:37:48.816113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.816218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.816243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.816263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.816277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.816305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-15 03:37:48.826131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.826248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.826273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.826287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.826301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.826329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-15 03:37:48.836165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.836321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.836346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.836360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.836373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.836400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 
00:34:42.852 [2024-07-15 03:37:48.846195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.846316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.846341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.846355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.846367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.846395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-15 03:37:48.856189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.856301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.856326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.856341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.856353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.856381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-15 03:37:48.866221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.866327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.866352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.866366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.866379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.866406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 
00:34:42.852 [2024-07-15 03:37:48.876323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.876441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.876466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.876479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.876492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.876519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-15 03:37:48.886279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.886393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.886418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.886432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.886443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.886470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-15 03:37:48.896315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.896417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.896442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.896456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.896469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.896497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 
00:34:42.852 [2024-07-15 03:37:48.906338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.906471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.906501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.906516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.906529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.852 [2024-07-15 03:37:48.906557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.852 qpair failed and we were unable to recover it. 00:34:42.852 [2024-07-15 03:37:48.916404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.852 [2024-07-15 03:37:48.916545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.852 [2024-07-15 03:37:48.916569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.852 [2024-07-15 03:37:48.916583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.852 [2024-07-15 03:37:48.916596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.853 [2024-07-15 03:37:48.916623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-15 03:37:48.926427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.853 [2024-07-15 03:37:48.926534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.853 [2024-07-15 03:37:48.926558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.853 [2024-07-15 03:37:48.926572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.853 [2024-07-15 03:37:48.926584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.853 [2024-07-15 03:37:48.926611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.853 qpair failed and we were unable to recover it. 
00:34:42.853 [2024-07-15 03:37:48.936454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.853 [2024-07-15 03:37:48.936562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.853 [2024-07-15 03:37:48.936588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.853 [2024-07-15 03:37:48.936602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.853 [2024-07-15 03:37:48.936615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.853 [2024-07-15 03:37:48.936643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-15 03:37:48.946524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.853 [2024-07-15 03:37:48.946648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.853 [2024-07-15 03:37:48.946674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.853 [2024-07-15 03:37:48.946688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.853 [2024-07-15 03:37:48.946701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.853 [2024-07-15 03:37:48.946728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-15 03:37:48.956504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.853 [2024-07-15 03:37:48.956615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.853 [2024-07-15 03:37:48.956640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.853 [2024-07-15 03:37:48.956654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.853 [2024-07-15 03:37:48.956667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.853 [2024-07-15 03:37:48.956695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.853 qpair failed and we were unable to recover it. 
00:34:42.853 [2024-07-15 03:37:48.966553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.853 [2024-07-15 03:37:48.966665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.853 [2024-07-15 03:37:48.966690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.853 [2024-07-15 03:37:48.966704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.853 [2024-07-15 03:37:48.966717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.853 [2024-07-15 03:37:48.966744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-15 03:37:48.976557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.853 [2024-07-15 03:37:48.976657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.853 [2024-07-15 03:37:48.976682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.853 [2024-07-15 03:37:48.976696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.853 [2024-07-15 03:37:48.976709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.853 [2024-07-15 03:37:48.976736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.853 qpair failed and we were unable to recover it. 00:34:42.853 [2024-07-15 03:37:48.986596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.853 [2024-07-15 03:37:48.986709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.853 [2024-07-15 03:37:48.986734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.853 [2024-07-15 03:37:48.986748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.853 [2024-07-15 03:37:48.986760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:42.853 [2024-07-15 03:37:48.986788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.853 qpair failed and we were unable to recover it. 
00:34:43.112 [2024-07-15 03:37:48.996638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.112 [2024-07-15 03:37:48.996767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.112 [2024-07-15 03:37:48.996797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.112 [2024-07-15 03:37:48.996812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.112 [2024-07-15 03:37:48.996825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.112 [2024-07-15 03:37:48.996853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-15 03:37:49.006643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.112 [2024-07-15 03:37:49.006753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.112 [2024-07-15 03:37:49.006779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.112 [2024-07-15 03:37:49.006793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.112 [2024-07-15 03:37:49.006806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.112 [2024-07-15 03:37:49.006833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-15 03:37:49.016673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.112 [2024-07-15 03:37:49.016820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.112 [2024-07-15 03:37:49.016845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.112 [2024-07-15 03:37:49.016859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.112 [2024-07-15 03:37:49.016871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.112 [2024-07-15 03:37:49.016908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.112 qpair failed and we were unable to recover it. 
00:34:43.112 [2024-07-15 03:37:49.026765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.112 [2024-07-15 03:37:49.026892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.112 [2024-07-15 03:37:49.026928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.112 [2024-07-15 03:37:49.026946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.112 [2024-07-15 03:37:49.026960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.112 [2024-07-15 03:37:49.026991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-15 03:37:49.036758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.112 [2024-07-15 03:37:49.036869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.112 [2024-07-15 03:37:49.036902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.112 [2024-07-15 03:37:49.036917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.112 [2024-07-15 03:37:49.036930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.112 [2024-07-15 03:37:49.036964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.112 qpair failed and we were unable to recover it. 00:34:43.112 [2024-07-15 03:37:49.046783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.112 [2024-07-15 03:37:49.046911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.112 [2024-07-15 03:37:49.046936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.046949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.046962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.046990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 
00:34:43.113 [2024-07-15 03:37:49.056842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.056968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.057004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.057019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.057032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.057060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 00:34:43.113 [2024-07-15 03:37:49.066811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.066917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.066943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.066957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.066970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.066997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 00:34:43.113 [2024-07-15 03:37:49.076866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.076995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.077021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.077036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.077053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.077082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 
00:34:43.113 [2024-07-15 03:37:49.086873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.086993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.087023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.087038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.087051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.087078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 00:34:43.113 [2024-07-15 03:37:49.096924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.097038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.097064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.097078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.097091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.097119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 00:34:43.113 [2024-07-15 03:37:49.106967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.107095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.107121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.107135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.107148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.107175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 
00:34:43.113 [2024-07-15 03:37:49.117001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.117123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.117148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.117162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.117174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.117202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 00:34:43.113 [2024-07-15 03:37:49.126990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.127114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.127140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.127154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.127167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.127200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 00:34:43.113 [2024-07-15 03:37:49.137036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.137150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.137174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.137189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.137202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.137229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 
00:34:43.113 [2024-07-15 03:37:49.147070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.147202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.147227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.147242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.147254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.147281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 00:34:43.113 [2024-07-15 03:37:49.157089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.157200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.157225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.157239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.157252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.157279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 00:34:43.113 [2024-07-15 03:37:49.167091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.113 [2024-07-15 03:37:49.167203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.113 [2024-07-15 03:37:49.167228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.113 [2024-07-15 03:37:49.167242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.113 [2024-07-15 03:37:49.167255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.113 [2024-07-15 03:37:49.167282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.113 qpair failed and we were unable to recover it. 
00:34:43.897 [2024-07-15 03:37:49.839018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.897 [2024-07-15 03:37:49.839145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.897 [2024-07-15 03:37:49.839170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.897 [2024-07-15 03:37:49.839184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.897 [2024-07-15 03:37:49.839197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.897 [2024-07-15 03:37:49.839224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.897 qpair failed and we were unable to recover it. 00:34:43.897 [2024-07-15 03:37:49.849070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.897 [2024-07-15 03:37:49.849181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.897 [2024-07-15 03:37:49.849211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.897 [2024-07-15 03:37:49.849226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.897 [2024-07-15 03:37:49.849239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.897 [2024-07-15 03:37:49.849267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.897 qpair failed and we were unable to recover it. 00:34:43.897 [2024-07-15 03:37:49.859087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.897 [2024-07-15 03:37:49.859218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.897 [2024-07-15 03:37:49.859243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.897 [2024-07-15 03:37:49.859257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.897 [2024-07-15 03:37:49.859270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.897 [2024-07-15 03:37:49.859298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.897 qpair failed and we were unable to recover it. 
00:34:43.897 [2024-07-15 03:37:49.869102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.897 [2024-07-15 03:37:49.869208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.897 [2024-07-15 03:37:49.869233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.897 [2024-07-15 03:37:49.869247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.897 [2024-07-15 03:37:49.869259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.897 [2024-07-15 03:37:49.869286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.897 qpair failed and we were unable to recover it. 00:34:43.897 [2024-07-15 03:37:49.879128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.897 [2024-07-15 03:37:49.879240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.897 [2024-07-15 03:37:49.879265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.897 [2024-07-15 03:37:49.879279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.897 [2024-07-15 03:37:49.879292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.897 [2024-07-15 03:37:49.879319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.897 qpair failed and we were unable to recover it. 00:34:43.897 [2024-07-15 03:37:49.889162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.897 [2024-07-15 03:37:49.889318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.897 [2024-07-15 03:37:49.889343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.897 [2024-07-15 03:37:49.889357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.897 [2024-07-15 03:37:49.889369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.889401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 
00:34:43.898 [2024-07-15 03:37:49.899180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:49.899288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:49.899313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:49.899327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:49.899340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.899367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-15 03:37:49.909203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:49.909308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:49.909342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:49.909356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:49.909369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.909396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-15 03:37:49.919228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:49.919337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:49.919362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:49.919375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:49.919388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.919415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 
00:34:43.898 [2024-07-15 03:37:49.929259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:49.929371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:49.929394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:49.929407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:49.929419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.929446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-15 03:37:49.939353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:49.939472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:49.939503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:49.939518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:49.939531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.939559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-15 03:37:49.949347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:49.949499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:49.949524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:49.949539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:49.949552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.949580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 
00:34:43.898 [2024-07-15 03:37:49.959357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:49.959477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:49.959501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:49.959515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:49.959527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.959555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-15 03:37:49.969390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:49.969500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:49.969526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:49.969539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:49.969552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.969580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-15 03:37:49.979445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:49.979553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:49.979578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:49.979592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:49.979605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.979640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 
00:34:43.898 [2024-07-15 03:37:49.989476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:49.989624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:49.989649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:49.989663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:49.989675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.989705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-15 03:37:49.999505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:49.999636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:49.999664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:49.999684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:49.999698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:49.999727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 00:34:43.898 [2024-07-15 03:37:50.009496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:50.009607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:50.009636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.898 [2024-07-15 03:37:50.009653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.898 [2024-07-15 03:37:50.009667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.898 [2024-07-15 03:37:50.009697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.898 qpair failed and we were unable to recover it. 
00:34:43.898 [2024-07-15 03:37:50.019571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.898 [2024-07-15 03:37:50.019727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.898 [2024-07-15 03:37:50.019756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.899 [2024-07-15 03:37:50.019771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.899 [2024-07-15 03:37:50.019783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.899 [2024-07-15 03:37:50.019815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.899 qpair failed and we were unable to recover it. 00:34:43.899 [2024-07-15 03:37:50.029613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:43.899 [2024-07-15 03:37:50.029736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:43.899 [2024-07-15 03:37:50.029769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:43.899 [2024-07-15 03:37:50.029785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.899 [2024-07-15 03:37:50.029797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:43.899 [2024-07-15 03:37:50.029827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:43.899 qpair failed and we were unable to recover it. 00:34:44.158 [2024-07-15 03:37:50.039704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.158 [2024-07-15 03:37:50.039842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.158 [2024-07-15 03:37:50.039868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.158 [2024-07-15 03:37:50.039891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.158 [2024-07-15 03:37:50.039904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.158 [2024-07-15 03:37:50.039934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.158 qpair failed and we were unable to recover it. 
00:34:44.158 [2024-07-15 03:37:50.049653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.158 [2024-07-15 03:37:50.049776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.158 [2024-07-15 03:37:50.049801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.158 [2024-07-15 03:37:50.049815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.158 [2024-07-15 03:37:50.049827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.158 [2024-07-15 03:37:50.049856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.158 qpair failed and we were unable to recover it. 00:34:44.158 [2024-07-15 03:37:50.059669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.158 [2024-07-15 03:37:50.059794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.158 [2024-07-15 03:37:50.059821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.158 [2024-07-15 03:37:50.059836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.158 [2024-07-15 03:37:50.059849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.158 [2024-07-15 03:37:50.059885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.158 qpair failed and we were unable to recover it. 00:34:44.158 [2024-07-15 03:37:50.069677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.158 [2024-07-15 03:37:50.069783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.158 [2024-07-15 03:37:50.069809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.158 [2024-07-15 03:37:50.069823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.158 [2024-07-15 03:37:50.069843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.158 [2024-07-15 03:37:50.069872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.158 qpair failed and we were unable to recover it. 
00:34:44.158 [2024-07-15 03:37:50.079754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.158 [2024-07-15 03:37:50.079871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.158 [2024-07-15 03:37:50.079906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.158 [2024-07-15 03:37:50.079921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.158 [2024-07-15 03:37:50.079934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.158 [2024-07-15 03:37:50.079962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.158 qpair failed and we were unable to recover it. 00:34:44.158 [2024-07-15 03:37:50.089751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.158 [2024-07-15 03:37:50.089865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.159 [2024-07-15 03:37:50.089899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.159 [2024-07-15 03:37:50.089914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.159 [2024-07-15 03:37:50.089927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.159 [2024-07-15 03:37:50.089956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-15 03:37:50.099805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.159 [2024-07-15 03:37:50.099920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.159 [2024-07-15 03:37:50.099947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.159 [2024-07-15 03:37:50.099962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.159 [2024-07-15 03:37:50.099975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.159 [2024-07-15 03:37:50.100003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.159 qpair failed and we were unable to recover it. 
00:34:44.159 [2024-07-15 03:37:50.109787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.159 [2024-07-15 03:37:50.109910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.159 [2024-07-15 03:37:50.109936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.159 [2024-07-15 03:37:50.109950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.159 [2024-07-15 03:37:50.109962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.159 [2024-07-15 03:37:50.109990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-15 03:37:50.119847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.159 [2024-07-15 03:37:50.120012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.159 [2024-07-15 03:37:50.120037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.159 [2024-07-15 03:37:50.120051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.159 [2024-07-15 03:37:50.120064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.159 [2024-07-15 03:37:50.120092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-15 03:37:50.129857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.159 [2024-07-15 03:37:50.129986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.159 [2024-07-15 03:37:50.130011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.159 [2024-07-15 03:37:50.130025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.159 [2024-07-15 03:37:50.130038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.159 [2024-07-15 03:37:50.130066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.159 qpair failed and we were unable to recover it. 
00:34:44.159 [2024-07-15 03:37:50.139867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.159 [2024-07-15 03:37:50.139983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.159 [2024-07-15 03:37:50.140008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.159 [2024-07-15 03:37:50.140023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.159 [2024-07-15 03:37:50.140036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.159 [2024-07-15 03:37:50.140064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-15 03:37:50.149934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.159 [2024-07-15 03:37:50.150046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.159 [2024-07-15 03:37:50.150071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.159 [2024-07-15 03:37:50.150086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.159 [2024-07-15 03:37:50.150103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.159 [2024-07-15 03:37:50.150133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-15 03:37:50.160019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.159 [2024-07-15 03:37:50.160162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.159 [2024-07-15 03:37:50.160188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.159 [2024-07-15 03:37:50.160202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.159 [2024-07-15 03:37:50.160220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.159 [2024-07-15 03:37:50.160249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.159 qpair failed and we were unable to recover it. 
00:34:44.159 [2024-07-15 03:37:50.169980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.159 [2024-07-15 03:37:50.170091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.159 [2024-07-15 03:37:50.170116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.159 [2024-07-15 03:37:50.170130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.159 [2024-07-15 03:37:50.170143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.159 [2024-07-15 03:37:50.170171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-15 03:37:50.180018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.159 [2024-07-15 03:37:50.180131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.159 [2024-07-15 03:37:50.180156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.159 [2024-07-15 03:37:50.180170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.159 [2024-07-15 03:37:50.180183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2300f20 00:34:44.159 [2024-07-15 03:37:50.180211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:44.159 qpair failed and we were unable to recover it. 00:34:44.159 [2024-07-15 03:37:50.190017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.159 [2024-07-15 03:37:50.190150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.159 [2024-07-15 03:37:50.190182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.159 [2024-07-15 03:37:50.190201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.159 [2024-07-15 03:37:50.190215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcbe8000b90 00:34:44.159 [2024-07-15 03:37:50.190246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:44.160 qpair failed and we were unable to recover it. 
00:34:44.160 [2024-07-15 03:37:50.200082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.160 [2024-07-15 03:37:50.200211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.160 [2024-07-15 03:37:50.200238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.160 [2024-07-15 03:37:50.200253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.160 [2024-07-15 03:37:50.200266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcbe8000b90 00:34:44.160 [2024-07-15 03:37:50.200297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-15 03:37:50.210103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.160 [2024-07-15 03:37:50.210224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.160 [2024-07-15 03:37:50.210256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.160 [2024-07-15 03:37:50.210272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.160 [2024-07-15 03:37:50.210286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcbe0000b90 00:34:44.160 [2024-07-15 03:37:50.210320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-15 03:37:50.220120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:44.160 [2024-07-15 03:37:50.220232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:44.160 [2024-07-15 03:37:50.220260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:44.160 [2024-07-15 03:37:50.220276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:44.160 [2024-07-15 03:37:50.220289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcbe0000b90 00:34:44.160 [2024-07-15 03:37:50.220320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:44.160 qpair failed and we were unable to recover it. 00:34:44.160 [2024-07-15 03:37:50.220458] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:44.160 A controller has encountered a failure and is being reset. 00:34:44.160 Controller properly reset. 
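The block above is the host side of the planned disconnect: the target has dropped its controller state, so every attempt to re-attach an I/O queue pair names a controller ID (0x1) the target no longer tracks, and each Fabrics CONNECT is rejected with sct 1, sc 130 (0x82, which corresponds to the NVMe-oF "Connect Invalid Parameters" status). A hedged sketch of one way to provoke this from the target side with SPDK's scripts/rpc.py (the NQN and address come from the log; the test script itself may drive the disconnect differently):

  # drop the listener out from under the connected host
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host CONNECT retries now fail as above until the listener returns
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the keep-alive submission also fails, the host abandons the stale controller and performs a full reset, which is the "Controller properly reset." line above.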
00:34:44.160 Initializing NVMe Controllers 00:34:44.160 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:44.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:44.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:44.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:44.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:44.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:44.160 Initialization complete. Launching workers. 00:34:44.160 Starting thread on core 1 00:34:44.160 Starting thread on core 2 00:34:44.160 Starting thread on core 3 00:34:44.160 Starting thread on core 0 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:44.160 00:34:44.160 real 0m10.809s 00:34:44.160 user 0m18.106s 00:34:44.160 sys 0m5.424s 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.160 ************************************ 00:34:44.160 END TEST nvmf_target_disconnect_tc2 00:34:44.160 ************************************ 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:44.160 rmmod nvme_tcp 00:34:44.160 rmmod nvme_fabrics 00:34:44.160 rmmod nvme_keyring 00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3350942 ']' 00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3350942 00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3350942 ']' 00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3350942 00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:44.160 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:34:44.160 rmmod nvme_tcp
00:34:44.160 rmmod nvme_fabrics
00:34:44.160 rmmod nvme_keyring
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3350942 ']'
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3350942
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3350942 ']'
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3350942
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3350942
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3350942'
00:34:44.419 killing process with pid 3350942
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3350942
00:34:44.419 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3350942
00:34:44.678 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:34:44.678 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:34:44.678 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:34:44.678 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:34:44.678 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:34:44.678 03:37:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:44.678 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:44.678 03:37:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:46.578 03:37:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:34:46.578
00:34:46.578 real 0m15.451s
00:34:46.578 user 0m44.370s
00:34:46.578 sys 0m7.284s
00:34:46.578 03:37:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:46.578 03:37:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:34:46.578 ************************************
00:34:46.578 END TEST nvmf_target_disconnect
00:34:46.578 ************************************
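nvmftestfini above tears the rig back down. Condensed into plain shell, the sequence is roughly the following (helper names are from test/nvmf/common.sh; the cvl_0_* interface names are specific to this machine):

  sync
  modprobe -v -r nvme-tcp        # prints the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill 3350942 && wait 3350942   # killprocess: stop the nvmf_tgt reactors
  ip -4 addr flush cvl_0_1       # drop the test addresses from the second test port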
00:34:46.578 03:37:52 -- common/autotest_common.sh@1142 -- # return 0
00:34:46.578 03:37:52 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:34:46.578 03:37:52 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:46.578 03:37:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:46.578 03:37:52 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:34:46.578
00:34:46.578 real 27m6.940s
00:34:46.578 user 74m6.490s
00:34:46.578 sys 6m19.132s
00:34:46.578 03:37:52 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:46.578 03:37:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:46.578 ************************************
00:34:46.578 END TEST nvmf_tcp
00:34:46.578 ************************************
00:34:46.578 03:37:52 -- common/autotest_common.sh@1142 -- # return 0
00:34:46.578 03:37:52 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:34:46.578 03:37:52 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:34:46.578 03:37:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:34:46.578 03:37:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:46.578 03:37:52 -- common/autotest_common.sh@10 -- # set +x
00:34:46.578 ************************************
00:34:46.578 START TEST spdkcli_nvmf_tcp
00:34:46.578 ************************************
00:34:46.578 03:37:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:34:46.836 * Looking for test storage...
00:34:46.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
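NVME_HOSTNQN and NVME_HOSTID come straight from nvme-cli, and NVME_HOST stores them as ready-made connect flags. A hedged example of how they are produced and later consumed by the kernel initiator (address and port taken from the variables above):

  nvme gen-hostnqn
  # -> nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  nvme connect -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"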
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6 prepend the Go (/opt/go/1.21.1/bin), protoc (/opt/protoc/21.7/bin) and golangci (/opt/golangci/1.54.2/bin) tool directories to PATH and export it; the four near-identical full PATH expansions are elided here.]
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3352134
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3352134
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3352134 ']'
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:46.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:46.836 03:37:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.836 [2024-07-15 03:37:52.821679] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:46.836 [2024-07-15 03:37:52.821771] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352134 ] 00:34:46.836 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.836 [2024-07-15 03:37:52.878869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:46.836 [2024-07-15 03:37:52.963421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.836 [2024-07-15 03:37:52.963425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.093 03:37:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:47.093 03:37:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:47.093 03:37:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:47.093 03:37:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:47.093 03:37:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:47.093 03:37:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:47.093 03:37:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:47.093 03:37:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:47.093 03:37:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:47.093 03:37:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:47.093 03:37:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:47.093 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:47.093 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:47.093 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:47.093 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:47.093 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:47.093 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:47.093 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:47.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:47.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:47.094 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:47.094 
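The long argument string above is a batch for spdkcli_job.py: each quoted group gives an spdkcli command, the output string expected from it, and an optional True flag marking entries whose result is verified. A minimal sketch of issuing a few of the same commands by hand with the in-tree client (paths as used throughout this run):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # 32 MB malloc bdev with 512-byte blocks, the TCP transport, one subsystem, one listener.
    $SPDK_DIR/scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    $SPDK_DIR/scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    $SPDK_DIR/scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    $SPDK_DIR/scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4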
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:47.094 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:47.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:47.094 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:47.094 ' 00:34:49.616 [2024-07-15 03:37:55.630753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.986 [2024-07-15 03:37:56.867116] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:53.513 [2024-07-15 03:37:59.126239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:55.412 [2024-07-15 03:38:01.112611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:56.783 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:56.783 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:56.783 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:56.783 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:56.783 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:56.783 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:56.783 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:56.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:56.783 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:56.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:56.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:56.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:56.783 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:56.783 03:38:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:56.783 03:38:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:56.783 03:38:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:56.783 03:38:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:56.783 03:38:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:56.783 03:38:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:56.783 03:38:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:56.783 03:38:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:57.041 03:38:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:57.298 03:38:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:57.298 03:38:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:57.298 03:38:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:57.298 03:38:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:57.298 03:38:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:57.298 03:38:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:57.298 03:38:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:57.298 03:38:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:57.298 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:57.298 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:57.298 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:57.298 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:57.298 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:57.298 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:57.298 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:57.298 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:57.298 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:57.298 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:57.298 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:57.298 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:57.298 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:57.298 ' 00:35:02.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:02.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:02.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:02.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:02.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:02.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:02.573 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:02.573 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:02.573 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:02.573 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:02.573 Executing command: ['/bdevs/malloc delete Malloc4', 
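check_match, traced just above, renders the live tree with spdkcli.py ll /nvmf and validates it with the test/app/match tool against a stored template whose volatile fields are wildcarded; the generated file is deleted once it matches. The same check, condensed:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    MF=$SPDK_DIR/test/spdkcli/match_files/spdkcli_nvmf.test
    $SPDK_DIR/scripts/spdkcli.py ll /nvmf > $MF   # capture the current configuration
    $SPDK_DIR/test/app/match/match $MF.match      # zero exit status means every line matched
    rm -f $MF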
'Malloc4', False] 00:35:02.573 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:02.573 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:02.573 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3352134 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3352134 ']' 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3352134 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3352134 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3352134' 00:35:02.573 killing process with pid 3352134 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3352134 00:35:02.573 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3352134 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3352134 ']' 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3352134 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3352134 ']' 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3352134 00:35:02.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3352134) - No such process 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3352134 is not found' 00:35:02.832 Process with pid 3352134 is not found 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:02.832 00:35:02.832 real 0m16.063s 00:35:02.832 user 0m34.052s 00:35:02.832 sys 0m0.806s 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:02.832 03:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.832 ************************************ 00:35:02.832 END TEST spdkcli_nvmf_tcp 00:35:02.832 ************************************ 00:35:02.832 03:38:08 -- common/autotest_common.sh@1142 -- # return 0 00:35:02.832 03:38:08 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
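killprocess, expanded in the trace above, refuses to signal a pid it cannot positively identify: it probes liveness with kill -0, resolves the command name with ps, special-cases a sudo wrapper, and only then sends SIGTERM and waits. Its second invocation during cleanup shows the guard working: kill -0 fails with "No such process" and the function merely reports the pid as already gone. A condensed sketch (the real helper in autotest_common.sh handles a few more cases):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 1; }
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
        [ "$name" != "sudo" ] || return 1         # the real helper treats sudo wrappers separately
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                # wait only succeeds for our own children
    }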
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:02.832 03:38:08 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:02.832 03:38:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:02.832 03:38:08 -- common/autotest_common.sh@10 -- # set +x 00:35:02.832 ************************************ 00:35:02.832 START TEST nvmf_identify_passthru 00:35:02.832 ************************************ 00:35:02.832 03:38:08 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:02.832 * Looking for test storage... 00:35:02.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:02.832 03:38:08 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.832 03:38:08 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.832 03:38:08 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.832 03:38:08 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.832 03:38:08 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.832 03:38:08 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.832 03:38:08 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.832 03:38:08 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:02.832 03:38:08 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:02.832 03:38:08 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.832 03:38:08 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.832 03:38:08 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.832 03:38:08 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.832 03:38:08 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.832 03:38:08 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.832 03:38:08 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.832 03:38:08 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:02.832 03:38:08 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.832 03:38:08 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.832 03:38:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:02.832 03:38:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.832 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:02.833 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:02.833 03:38:08 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:02.833 03:38:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.735 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:04.735 03:38:10 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:04.735 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:04.735 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:04.735 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:04.735 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:04.735 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:04.735 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:04.735 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:04.735 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:04.735 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:04.735 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:04.736 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:04.736 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:04.736 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:04.736 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
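Device discovery here is entirely sysfs-driven: nvmf/common.sh keeps allow-lists of Intel E810/X722 and Mellanox device IDs, matches them against the PCI bus, and resolves each hit to its kernel netdev by globbing /sys/bus/pci/devices/$pci/net/. The same lookup for the two E810 ports (0x8086:0x159b) found above:

    # Print the netdev behind each PCI function; here they come back as cvl_0_0 and cvl_0_1.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        ls /sys/bus/pci/devices/$pci/net/
    done
    lspci -nn -s 0a:00.0   # confirms the vendor:device pair behind the function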
00:35:04.736 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:04.994 03:38:10 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:04.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:04.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:35:04.994 00:35:04.994 --- 10.0.0.2 ping statistics --- 00:35:04.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.994 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:35:04.994 03:38:11 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:04.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
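nvmf_tcp_init, traced above, builds the two-port topology the physical TCP tests need: the first port moves into a private network namespace as the target side (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), and a ping in each direction proves the path before any NVMe traffic flows. The same setup, condensed from the commands above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns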
00:35:04.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:35:04.994 00:35:04.994 --- 10.0.0.1 ping statistics --- 00:35:04.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.994 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:35:04.994 03:38:11 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:04.994 03:38:11 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:04.994 03:38:11 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:04.994 03:38:11 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:04.994 03:38:11 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:04.994 03:38:11 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:04.994 03:38:11 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:04.994 03:38:11 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:04.994 03:38:11 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:04.995 03:38:11 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.995 03:38:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:35:04.995 03:38:11 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:35:04.995 03:38:11 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:35:04.995 03:38:11 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:35:04.995 03:38:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:04.995 03:38:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:04.995 03:38:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:05.253 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.436 
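get_first_nvme_bdf works by asking scripts/gen_nvme.sh for a JSON bdev config covering every local NVMe controller and letting jq pull out the transport addresses; the first address becomes the device under test. Condensed, together with the PCIe-side identify that follows it:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdf=$($SPDK_DIR/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    echo "$bdf"   # here: 0000:88:00.0
    # Read the serial number straight over PCIe; this is the reference value
    # the passthru test later compares against.
    $SPDK_DIR/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}'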
03:38:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:09.436 03:38:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:09.436 03:38:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:09.436 03:38:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:09.436 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.645 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:13.645 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:13.645 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:13.645 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.645 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:13.645 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:13.645 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.645 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3356630 00:35:13.645 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:13.645 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:13.645 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3356630 00:35:13.645 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3356630 ']' 00:35:13.645 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.645 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:13.645 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.646 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:13.646 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.646 [2024-07-15 03:38:19.565075] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:13.646 [2024-07-15 03:38:19.565188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.646 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.646 [2024-07-15 03:38:19.632501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:13.646 [2024-07-15 03:38:19.719844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:13.646 [2024-07-15 03:38:19.719919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
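The target is launched inside the namespace with --wait-for-rpc, so after EAL initialization it parks and accepts only configuration RPCs; waitforlisten then polls the UNIX-domain socket until the app answers. A condensed sketch of that polling loop (rpc_get_methods serves here as a cheap, always-available probe; the exact RPC autotest_common.sh issues may differ):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for _ in $(seq 1 100); do
        $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done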
00:35:13.646 [2024-07-15 03:38:19.719933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.646 [2024-07-15 03:38:19.719944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:13.646 [2024-07-15 03:38:19.719954] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:13.646 [2024-07-15 03:38:19.720229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.646 [2024-07-15 03:38:19.720289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:13.646 [2024-07-15 03:38:19.720351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:13.646 [2024-07-15 03:38:19.720354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.927 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:13.927 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:35:13.927 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:13.927 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.927 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.927 INFO: Log level set to 20 00:35:13.927 INFO: Requests: 00:35:13.927 { 00:35:13.927 "jsonrpc": "2.0", 00:35:13.927 "method": "nvmf_set_config", 00:35:13.927 "id": 1, 00:35:13.928 "params": { 00:35:13.928 "admin_cmd_passthru": { 00:35:13.928 "identify_ctrlr": true 00:35:13.928 } 00:35:13.928 } 00:35:13.928 } 00:35:13.928 00:35:13.928 INFO: response: 00:35:13.928 { 00:35:13.928 "jsonrpc": "2.0", 00:35:13.928 "id": 1, 00:35:13.928 "result": true 00:35:13.928 } 00:35:13.928 00:35:13.928 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.928 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:13.928 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.928 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.928 INFO: Setting log level to 20 00:35:13.928 INFO: Setting log level to 20 00:35:13.928 INFO: Log level set to 20 00:35:13.928 INFO: Log level set to 20 00:35:13.928 INFO: Requests: 00:35:13.928 { 00:35:13.928 "jsonrpc": "2.0", 00:35:13.928 "method": "framework_start_init", 00:35:13.928 "id": 1 00:35:13.928 } 00:35:13.928 00:35:13.928 INFO: Requests: 00:35:13.928 { 00:35:13.928 "jsonrpc": "2.0", 00:35:13.928 "method": "framework_start_init", 00:35:13.928 "id": 1 00:35:13.928 } 00:35:13.928 00:35:13.928 [2024-07-15 03:38:19.891063] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:13.928 INFO: response: 00:35:13.928 { 00:35:13.928 "jsonrpc": "2.0", 00:35:13.928 "id": 1, 00:35:13.928 "result": true 00:35:13.928 } 00:35:13.928 00:35:13.928 INFO: response: 00:35:13.928 { 00:35:13.928 "jsonrpc": "2.0", 00:35:13.928 "id": 1, 00:35:13.928 "result": true 00:35:13.928 } 00:35:13.928 00:35:13.928 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.928 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:13.928 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.928 03:38:19 nvmf_identify_passthru -- 
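The ordering in the JSON exchange above is the whole point of --wait-for-rpc: nvmf_set_config can only change admin_cmd_passthru.identify_ctrlr while the framework is still in its startup state, and the "Custom identify ctrlr handler enabled" notice fires during framework_start_init, after which the transport is created normally. The same three calls over the plain RPC client:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Must be issued before framework_start_init; startup-time config is rejected afterwards.
    $SPDK_DIR/scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    $SPDK_DIR/scripts/rpc.py framework_start_init
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192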
common/autotest_common.sh@10 -- # set +x 00:35:13.928 INFO: Setting log level to 40 00:35:13.928 INFO: Setting log level to 40 00:35:13.928 INFO: Setting log level to 40 00:35:13.928 [2024-07-15 03:38:19.900974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:13.928 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.928 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:13.928 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:13.928 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.928 03:38:19 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:13.928 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.928 03:38:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.205 Nvme0n1 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.205 03:38:22 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.205 03:38:22 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.205 03:38:22 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.205 [2024-07-15 03:38:22.790220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.205 03:38:22 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.205 [ 00:35:17.205 { 00:35:17.205 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:17.205 "subtype": "Discovery", 00:35:17.205 "listen_addresses": [], 00:35:17.205 "allow_any_host": true, 00:35:17.205 "hosts": [] 00:35:17.205 }, 00:35:17.205 { 00:35:17.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:17.205 "subtype": "NVMe", 00:35:17.205 "listen_addresses": [ 00:35:17.205 { 00:35:17.205 "trtype": "TCP", 00:35:17.205 "adrfam": "IPv4", 00:35:17.205 "traddr": "10.0.0.2", 00:35:17.205 "trsvcid": "4420" 00:35:17.205 } 00:35:17.205 ], 00:35:17.205 "allow_any_host": true, 00:35:17.205 "hosts": [], 00:35:17.205 "serial_number": 
"SPDK00000000000001", 00:35:17.205 "model_number": "SPDK bdev Controller", 00:35:17.205 "max_namespaces": 1, 00:35:17.205 "min_cntlid": 1, 00:35:17.205 "max_cntlid": 65519, 00:35:17.205 "namespaces": [ 00:35:17.205 { 00:35:17.205 "nsid": 1, 00:35:17.205 "bdev_name": "Nvme0n1", 00:35:17.205 "name": "Nvme0n1", 00:35:17.205 "nguid": "2BC2A0BA720D4AE89D27AF32DAE07E76", 00:35:17.205 "uuid": "2bc2a0ba-720d-4ae8-9d27-af32dae07e76" 00:35:17.205 } 00:35:17.205 ] 00:35:17.205 } 00:35:17.205 ] 00:35:17.205 03:38:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.205 03:38:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:17.205 03:38:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:17.205 03:38:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:17.205 EAL: No free 2048 kB hugepages reported on node 1 00:35:17.205 03:38:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:17.205 03:38:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:17.205 03:38:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:17.205 03:38:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:17.205 EAL: No free 2048 kB hugepages reported on node 1 00:35:17.205 03:38:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:17.205 03:38:23 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:17.205 03:38:23 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:17.205 03:38:23 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:17.205 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.206 03:38:23 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:17.206 03:38:23 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:17.206 03:38:23 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:17.206 03:38:23 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:17.206 03:38:23 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:17.206 03:38:23 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:17.206 03:38:23 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:17.206 03:38:23 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:17.206 rmmod nvme_tcp 00:35:17.206 rmmod nvme_fabrics 00:35:17.206 rmmod nvme_keyring 00:35:17.206 03:38:23 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:17.206 03:38:23 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:17.206 03:38:23 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:17.206 03:38:23 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3356630 ']' 00:35:17.206 03:38:23 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3356630 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3356630 ']' 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3356630 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3356630 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3356630' 00:35:17.206 killing process with pid 3356630 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3356630 00:35:17.206 03:38:23 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3356630 00:35:18.575 03:38:24 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:18.575 03:38:24 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:18.575 03:38:24 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:18.575 03:38:24 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:18.575 03:38:24 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:18.575 03:38:24 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.575 03:38:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:18.575 03:38:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.104 03:38:26 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:21.104 00:35:21.104 real 0m17.857s 00:35:21.104 user 0m26.148s 00:35:21.104 sys 0m2.253s 00:35:21.104 03:38:26 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:21.104 03:38:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:21.104 ************************************ 00:35:21.104 END TEST nvmf_identify_passthru 00:35:21.104 ************************************ 00:35:21.104 03:38:26 -- common/autotest_common.sh@1142 -- # return 0 00:35:21.104 03:38:26 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:21.104 03:38:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:21.104 03:38:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:21.104 03:38:26 -- common/autotest_common.sh@10 -- # set +x 00:35:21.104 ************************************ 00:35:21.104 START TEST nvmf_dif 00:35:21.104 ************************************ 00:35:21.104 03:38:26 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:21.104 * Looking for test storage... 
00:35:21.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:21.104 03:38:26 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.104 03:38:26 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.104 03:38:26 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.104 03:38:26 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.104 03:38:26 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.104 03:38:26 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.104 03:38:26 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.104 03:38:26 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:21.104 03:38:26 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:21.104 03:38:26 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:21.104 03:38:26 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:21.104 03:38:26 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:21.105 03:38:26 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:21.105 03:38:26 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:21.105 03:38:26 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:21.105 03:38:26 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:21.105 03:38:26 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.105 03:38:26 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:21.105 03:38:26 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:21.105 03:38:26 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:21.105 03:38:26 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.105 03:38:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:21.105 03:38:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.105 03:38:26 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:21.105 03:38:26 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:21.105 03:38:26 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:21.105 03:38:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:23.007 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:23.007 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
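The block above is nvmf/common.sh classifying the machine's NICs: it builds whitelists of supported device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox parts), keeps the E810 list for this rig, and resolves each matching PCI function to its kernel net devices, which is what produces the "Found 0000:0a:00.x" and "Found net devices under ..." lines here. A rough lspci-based sketch of the same scan (the harness walks its own pci_bus_cache rather than calling lspci; this is illustration only):

# 8086:159b (E810) is the vendor:device pair matched in this log.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci (0x8086 - 0x159b)"
    ls "/sys/bus/pci/devices/$pci/net/"   # kernel netdev name(s), e.g. cvl_0_0
done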
00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:23.007 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:23.007 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:23.007 03:38:28 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:23.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:23.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:35:23.007 00:35:23.007 --- 10.0.0.2 ping statistics --- 00:35:23.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.007 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:23.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:23.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:35:23.007 00:35:23.007 --- 10.0.0.1 ping statistics --- 00:35:23.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.007 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:23.007 03:38:28 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:23.942 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:23.942 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:23.942 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:23.942 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:23.942 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:23.942 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:23.942 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:23.942 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:23.942 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:23.942 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:23.942 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:23.942 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:23.942 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:23.942 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:23.942 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:23.942 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:23.942 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:24.201 03:38:30 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:24.201 03:38:30 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:24.201 03:38:30 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:24.201 03:38:30 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:24.201 03:38:30 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:24.201 03:38:30 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:24.201 03:38:30 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:24.201 03:38:30 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:24.201 03:38:30 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:24.201 03:38:30 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:24.201 03:38:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.201 03:38:30 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3359885 00:35:24.201 03:38:30 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3359885 00:35:24.201 03:38:30 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:24.201 03:38:30 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3359885 ']' 00:35:24.201 03:38:30 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.201 03:38:30 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:24.201 03:38:30 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.201 03:38:30 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:24.201 03:38:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.201 [2024-07-15 03:38:30.185471] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:24.201 [2024-07-15 03:38:30.185553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:24.201 EAL: No free 2048 kB hugepages reported on node 1 00:35:24.201 [2024-07-15 03:38:30.252304] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.201 [2024-07-15 03:38:30.340353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:24.201 [2024-07-15 03:38:30.340423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:24.201 [2024-07-15 03:38:30.340438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:24.201 [2024-07-15 03:38:30.340449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:24.201 [2024-07-15 03:38:30.340458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
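The nvmftestinit sequence traced above is the standard phy-test topology: one E810 port is moved into a private network namespace and becomes the target side, its sibling port stays in the root namespace as the initiator, both directions are ping-verified, and the target application is then launched inside the namespace. Condensed from the ip/iptables/nvmf_tgt commands in the trace, with interface names and addresses as logged:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
# The trace uses the full workspace path; the harness backgrounds this and
# then waits for the RPC socket (waitforlisten).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &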
00:35:24.201 [2024-07-15 03:38:30.340484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.460 03:38:30 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:24.460 03:38:30 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:24.460 03:38:30 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:24.460 03:38:30 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:24.460 03:38:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.460 03:38:30 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:24.460 03:38:30 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:24.460 03:38:30 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:24.460 03:38:30 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.460 03:38:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.460 [2024-07-15 03:38:30.472901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.460 03:38:30 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.460 03:38:30 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:24.460 03:38:30 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:24.460 03:38:30 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:24.460 03:38:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.460 ************************************ 00:35:24.460 START TEST fio_dif_1_default 00:35:24.460 ************************************ 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:24.460 bdev_null0 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:24.460 [2024-07-15 03:38:30.529179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:24.460 { 00:35:24.460 "params": { 00:35:24.460 "name": "Nvme$subsystem", 00:35:24.460 "trtype": "$TEST_TRANSPORT", 00:35:24.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:24.460 "adrfam": "ipv4", 00:35:24.460 "trsvcid": "$NVMF_PORT", 00:35:24.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:24.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:24.460 "hdgst": ${hdgst:-false}, 00:35:24.460 "ddgst": ${ddgst:-false} 00:35:24.460 }, 00:35:24.460 "method": "bdev_nvme_attach_controller" 00:35:24.460 } 00:35:24.460 EOF 00:35:24.460 )") 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:24.460 "params": { 00:35:24.460 "name": "Nvme0", 00:35:24.460 "trtype": "tcp", 00:35:24.460 "traddr": "10.0.0.2", 00:35:24.460 "adrfam": "ipv4", 00:35:24.460 "trsvcid": "4420", 00:35:24.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:24.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:24.460 "hdgst": false, 00:35:24.460 "ddgst": false 00:35:24.460 }, 00:35:24.460 "method": "bdev_nvme_attach_controller" 00:35:24.460 }' 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:24.460 03:38:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:24.718 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:24.719 fio-3.35 00:35:24.719 Starting 1 thread 00:35:24.719 EAL: No free 2048 kB hugepages reported on node 1 00:35:36.911 00:35:36.911 filename0: (groupid=0, jobs=1): err= 0: pid=3360108: Mon Jul 15 03:38:41 2024 00:35:36.911 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec) 00:35:36.911 slat (nsec): min=4757, max=73433, avg=10339.49, stdev=3265.94 00:35:36.911 clat (usec): min=660, max=47890, avg=21069.23, stdev=20201.90 00:35:36.911 lat (usec): min=669, max=47914, avg=21079.57, stdev=20202.08 00:35:36.911 clat percentiles (usec): 00:35:36.911 | 1.00th=[ 701], 5.00th=[ 725], 10.00th=[ 742], 20.00th=[ 791], 00:35:36.911 | 30.00th=[ 816], 40.00th=[ 824], 50.00th=[41157], 60.00th=[41157], 00:35:36.911 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:36.911 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47973], 99.95th=[47973], 00:35:36.911 | 99.99th=[47973] 00:35:36.911 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:35:36.911 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:35:36.911 
lat (usec) : 750=12.18%, 1000=37.61%
00:35:36.912 lat (msec) : 50=50.21%
00:35:36.912 cpu : usr=89.83%, sys=9.89%, ctx=11, majf=0, minf=231
00:35:36.912 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:36.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:36.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:36.912 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:36.912 latency : target=0, window=0, percentile=100.00%, depth=4
00:35:36.912
00:35:36.912 Run status group 0 (all jobs):
00:35:36.912 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10002-10002msec
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:36.912
00:35:36.912 real 0m10.963s
00:35:36.912 user 0m9.938s
00:35:36.912 sys 0m1.236s
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:35:36.912 ************************************
00:35:36.912 END TEST fio_dif_1_default
00:35:36.912 ************************************
00:35:36.912 03:38:41 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0
00:35:36.912 03:38:41 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:35:36.912 03:38:41 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:35:36.912 03:38:41 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable
00:35:36.912 03:38:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:35:36.912 ************************************
00:35:36.912 START TEST fio_dif_1_multi_subsystems
00:35:36.912 ************************************
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
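Each dif case drives the target through the same rpc_cmd sequence traced above: the TCP transport is created once with --dif-insert-or-strip (target/dif.sh@50), then per subsystem the harness creates a metadata-capable null bdev, a subsystem, a namespace, and a TCP listener. Spelled out against scripts/rpc.py, to which the rpc_cmd wrapper forwards, with arguments copied from the fio_dif_1_default trace:

rpc=./scripts/rpc.py   # talks to the nvmf_tgt started above over /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# fio_dif_1_multi_subsystems, starting here, repeats the last four calls a
# second time for bdev_null1 / nqn.2016-06.io.spdk:cnode1.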
00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:36.912 bdev_null0 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:36.912 [2024-07-15 03:38:41.547534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:36.912 bdev_null1 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:36.912 03:38:41 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.912 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:36.913 { 00:35:36.913 "params": { 00:35:36.913 "name": "Nvme$subsystem", 00:35:36.913 "trtype": "$TEST_TRANSPORT", 00:35:36.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.913 "adrfam": "ipv4", 00:35:36.913 "trsvcid": "$NVMF_PORT", 00:35:36.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.913 "hdgst": ${hdgst:-false}, 00:35:36.913 "ddgst": ${ddgst:-false} 00:35:36.913 }, 00:35:36.913 "method": "bdev_nvme_attach_controller" 00:35:36.913 } 00:35:36.913 
EOF 00:35:36.913 )") 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:36.913 { 00:35:36.913 "params": { 00:35:36.913 "name": "Nvme$subsystem", 00:35:36.913 "trtype": "$TEST_TRANSPORT", 00:35:36.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.913 "adrfam": "ipv4", 00:35:36.913 "trsvcid": "$NVMF_PORT", 00:35:36.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.913 "hdgst": ${hdgst:-false}, 00:35:36.913 "ddgst": ${ddgst:-false} 00:35:36.913 }, 00:35:36.913 "method": "bdev_nvme_attach_controller" 00:35:36.913 } 00:35:36.913 EOF 00:35:36.913 )") 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
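gen_nvmf_target_json, traced here, assembles one bdev_nvme_attach_controller fragment per subsystem and pipes the result through jq; fio never opens a kernel /dev/nvme* device, it preloads SPDK's bdev fio plugin and connects to the target as an NVMe/TCP host itself. A reduced standalone form of the invocation (config written to a file instead of /dev/fd/62; the outer subsystems/bdev wrapper is the standard SPDK JSON-config layout, and the job options approximate the generated fio config):

cat > /tmp/nvmf.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [{
    "method": "bdev_nvme_attach_controller",
    "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
               "adrfam": "ipv4", "trsvcid": "4420",
               "subnqn": "nqn.2016-06.io.spdk:cnode0",
               "hostnqn": "nqn.2016-06.io.spdk:host0",
               "hdgst": false, "ddgst": false}}]}]}
EOF
# Attaching controller "Nvme0" exposes its namespace as bdev "Nvme0n1".
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvmf.json \
    --name=filename0 --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=4 --runtime=10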
00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:36.913 "params": { 00:35:36.913 "name": "Nvme0", 00:35:36.913 "trtype": "tcp", 00:35:36.913 "traddr": "10.0.0.2", 00:35:36.913 "adrfam": "ipv4", 00:35:36.913 "trsvcid": "4420", 00:35:36.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:36.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:36.913 "hdgst": false, 00:35:36.913 "ddgst": false 00:35:36.913 }, 00:35:36.913 "method": "bdev_nvme_attach_controller" 00:35:36.913 },{ 00:35:36.913 "params": { 00:35:36.913 "name": "Nvme1", 00:35:36.913 "trtype": "tcp", 00:35:36.913 "traddr": "10.0.0.2", 00:35:36.913 "adrfam": "ipv4", 00:35:36.913 "trsvcid": "4420", 00:35:36.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.913 "hdgst": false, 00:35:36.913 "ddgst": false 00:35:36.913 }, 00:35:36.913 "method": "bdev_nvme_attach_controller" 00:35:36.913 }' 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:36.913 03:38:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.913 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:36.913 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:36.913 fio-3.35 00:35:36.913 Starting 2 threads 00:35:36.913 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.875 00:35:46.875 filename0: (groupid=0, jobs=1): err= 0: pid=3361510: Mon Jul 15 03:38:52 2024 00:35:46.875 read: IOPS=97, BW=389KiB/s (398kB/s)(3888KiB/10004msec) 00:35:46.875 slat (nsec): min=7032, max=50695, avg=10199.99, stdev=4794.52 00:35:46.875 clat (usec): min=40813, max=42541, avg=41134.79, stdev=379.33 00:35:46.875 lat (usec): min=40821, max=42570, avg=41144.99, stdev=379.80 00:35:46.875 clat percentiles (usec): 00:35:46.875 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:46.875 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:46.875 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:46.875 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:46.875 | 99.99th=[42730] 
00:35:46.875 bw ( KiB/s): min= 384, max= 416, per=33.70%, avg=387.20, stdev= 9.85, samples=20
00:35:46.875 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20
00:35:46.875 lat (msec) : 50=100.00%
00:35:46.875 cpu : usr=94.60%, sys=5.11%, ctx=15, majf=0, minf=134
00:35:46.875 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:46.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:46.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:46.875 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:46.875 latency : target=0, window=0, percentile=100.00%, depth=4
00:35:46.875 filename1: (groupid=0, jobs=1): err= 0: pid=3361511: Mon Jul 15 03:38:52 2024
00:35:46.875 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec)
00:35:46.875 slat (nsec): min=6892, max=77917, avg=10177.29, stdev=5255.55
00:35:46.875 clat (usec): min=687, max=42934, avg=21026.17, stdev=20227.86
00:35:46.875 lat (usec): min=694, max=42960, avg=21036.35, stdev=20228.12
00:35:46.875 clat percentiles (usec):
00:35:46.875 | 1.00th=[ 701], 5.00th=[ 717], 10.00th=[ 725], 20.00th=[ 742],
00:35:46.875 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[41157], 60.00th=[41157],
00:35:46.875 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:35:46.875 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730],
00:35:46.875 | 99.99th=[42730]
00:35:46.875 bw ( KiB/s): min= 704, max= 768, per=66.27%, avg=761.26, stdev=17.13, samples=19
00:35:46.875 iops : min= 176, max= 192, avg=190.32, stdev= 4.28, samples=19
00:35:46.875 lat (usec) : 750=26.68%, 1000=22.47%
00:35:46.875 lat (msec) : 2=0.74%, 50=50.11%
00:35:46.875 cpu : usr=94.55%, sys=5.16%, ctx=19, majf=0, minf=228
00:35:46.875 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:46.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:46.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:46.875 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:46.875 latency : target=0, window=0, percentile=100.00%, depth=4
00:35:46.875
00:35:46.875 Run status group 0 (all jobs):
00:35:46.875 READ: bw=1148KiB/s (1176kB/s), 389KiB/s-760KiB/s (398kB/s-778kB/s), io=11.2MiB (11.8MB), run=10003-10004msec
00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 --
xtrace_disable 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.875 00:35:46.875 real 0m11.426s 00:35:46.875 user 0m20.246s 00:35:46.875 sys 0m1.327s 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:46.875 03:38:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:46.875 ************************************ 00:35:46.875 END TEST fio_dif_1_multi_subsystems 00:35:46.875 ************************************ 00:35:46.875 03:38:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:46.875 03:38:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:46.875 03:38:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:46.875 03:38:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:46.875 03:38:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:46.875 ************************************ 00:35:46.875 START TEST fio_dif_rand_params 00:35:46.875 ************************************ 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.875 03:38:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.875 bdev_null0 00:35:46.875 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.875 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:46.875 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.875 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.875 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.875 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:46.875 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.875 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.140 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.140 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:47.140 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.140 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.140 [2024-07-15 03:38:53.026768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.140 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.140 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:47.140 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:47.140 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.140 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:47.140 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.140 03:38:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:47.141 03:38:53 
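Relative to the earlier cases, fio_dif_rand_params changes only the protection type on the backing device: the null bdev is recreated with --dif-type 3 instead of 1. In NVMe protection-information terms, type 1 requires the 4-byte reference tag carried in the per-block metadata to track the LBA, while type 3 leaves the reference tag opaque to the controller; the geometry is unchanged:

# name=bdev_null0, size=64 MiB, block=512 B, metadata=16 B per block, DIF type 3
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3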
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:47.141 { 00:35:47.141 "params": { 00:35:47.141 "name": "Nvme$subsystem", 00:35:47.141 "trtype": "$TEST_TRANSPORT", 00:35:47.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.141 "adrfam": "ipv4", 00:35:47.141 "trsvcid": "$NVMF_PORT", 00:35:47.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.141 "hdgst": ${hdgst:-false}, 00:35:47.141 "ddgst": ${ddgst:-false} 00:35:47.141 }, 00:35:47.141 "method": "bdev_nvme_attach_controller" 00:35:47.141 } 00:35:47.141 EOF 00:35:47.141 )") 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
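[Annotation] The trace above is gen_nvmf_target_json assembling the --spdk_json_conf payload: one bdev_nvme_attach_controller entry is emitted per subsystem id via a here-doc, the entries are comma-joined through IFS=, and jq . validates the result before it is handed to fio. A minimal standalone reconstruction follows; the outer "subsystems"/"bdev" wrapper is an assumption about the JSON-config shape the fio plugin consumes, not something printed in this log.

# gen_json_sketch.sh -- run as: bash gen_json_sketch.sh 0 1 2
config=()
for sub in "$@"; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,            # comma-joins the array entries in the expansion below
jq . <<EOF       # jq validates and pretty-prints the final document
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF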
00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:47.141 "params": { 00:35:47.141 "name": "Nvme0", 00:35:47.141 "trtype": "tcp", 00:35:47.141 "traddr": "10.0.0.2", 00:35:47.141 "adrfam": "ipv4", 00:35:47.141 "trsvcid": "4420", 00:35:47.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:47.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:47.141 "hdgst": false, 00:35:47.141 "ddgst": false 00:35:47.141 }, 00:35:47.141 "method": "bdev_nvme_attach_controller" 00:35:47.141 }' 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:47.141 03:38:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.414 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:47.414 ... 
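[Annotation] At this point the spdk_bdev plugin has been LD_PRELOADed and fio launched with the bdev JSON config on /dev/fd/62 and the generated job file on /dev/fd/61. An equivalent standalone invocation, using regular files instead of fd redirections, would look roughly like the sketch below. The job-file body is an assumption reconstructed from the parameters dif.sh set above (randread, bs=128k, numjobs=3, iodepth=3, runtime=5) and the filename0 job line in this trace.

cat > /tmp/dif.fio <<'EOF'
[global]
; thread=1 is required by the SPDK fio plugin
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
; filename is the bdev name created by bdev_nvme_attach_controller
filename=Nvme0n1
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio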
00:35:47.414 fio-3.35 00:35:47.414 Starting 3 threads 00:35:47.414 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.980 00:35:53.980 filename0: (groupid=0, jobs=1): err= 0: pid=3362802: Mon Jul 15 03:38:58 2024 00:35:53.980 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(128MiB/5007msec) 00:35:53.980 slat (nsec): min=7259, max=48693, avg=14387.70, stdev=4658.88 00:35:53.980 clat (usec): min=5588, max=57022, avg=14634.69, stdev=9870.60 00:35:53.980 lat (usec): min=5599, max=57041, avg=14649.07, stdev=9870.48 00:35:53.980 clat percentiles (usec): 00:35:53.980 | 1.00th=[ 5932], 5.00th=[ 6783], 10.00th=[ 8717], 20.00th=[ 9896], 00:35:53.980 | 30.00th=[11076], 40.00th=[11994], 50.00th=[12518], 60.00th=[13173], 00:35:53.980 | 70.00th=[14091], 80.00th=[14877], 90.00th=[16909], 95.00th=[48497], 00:35:53.980 | 99.00th=[54264], 99.50th=[54789], 99.90th=[56886], 99.95th=[56886], 00:35:53.980 | 99.99th=[56886] 00:35:53.980 bw ( KiB/s): min=21803, max=30208, per=32.97%, avg=26167.50, stdev=3478.68, samples=10 00:35:53.980 iops : min= 170, max= 236, avg=204.40, stdev=27.22, samples=10 00:35:53.980 lat (msec) : 10=20.88%, 20=72.88%, 50=1.95%, 100=4.29% 00:35:53.980 cpu : usr=92.09%, sys=7.49%, ctx=22, majf=0, minf=113 00:35:53.981 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.981 issued rwts: total=1025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.981 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:53.981 filename0: (groupid=0, jobs=1): err= 0: pid=3362803: Mon Jul 15 03:38:58 2024 00:35:53.981 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(137MiB/5005msec) 00:35:53.981 slat (nsec): min=7352, max=52201, avg=18323.44, stdev=6564.95 00:35:53.981 clat (usec): min=5050, max=56403, avg=13639.63, stdev=8040.02 00:35:53.981 lat (usec): min=5062, max=56433, avg=13657.95, stdev=8040.53 00:35:53.981 clat percentiles (usec): 00:35:53.981 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 8356], 20.00th=[ 9372], 00:35:53.981 | 30.00th=[10159], 40.00th=[11469], 50.00th=[12518], 60.00th=[13435], 00:35:53.981 | 70.00th=[14353], 80.00th=[15270], 90.00th=[16712], 95.00th=[18482], 00:35:53.981 | 99.00th=[51119], 99.50th=[52691], 99.90th=[56361], 99.95th=[56361], 00:35:53.981 | 99.99th=[56361] 00:35:53.981 bw ( KiB/s): min=24576, max=33024, per=35.35%, avg=28057.60, stdev=2843.53, samples=10 00:35:53.981 iops : min= 192, max= 258, avg=219.20, stdev=22.22, samples=10 00:35:53.981 lat (msec) : 10=28.30%, 20=67.88%, 50=1.64%, 100=2.18% 00:35:53.981 cpu : usr=92.59%, sys=6.93%, ctx=12, majf=0, minf=64 00:35:53.981 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.981 issued rwts: total=1099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.981 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:53.981 filename0: (groupid=0, jobs=1): err= 0: pid=3362804: Mon Jul 15 03:38:58 2024 00:35:53.981 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(125MiB/5042msec) 00:35:53.981 slat (nsec): min=7347, max=48681, avg=14853.62, stdev=4819.51 00:35:53.981 clat (usec): min=4945, max=54628, avg=15035.11, stdev=10114.53 00:35:53.981 lat (usec): min=4957, max=54640, avg=15049.97, stdev=10114.42 00:35:53.981 clat percentiles (usec): 
00:35:53.981 | 1.00th=[ 6063], 5.00th=[ 8291], 10.00th=[ 9110], 20.00th=[10290], 00:35:53.981 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:35:53.981 | 70.00th=[14091], 80.00th=[15008], 90.00th=[16712], 95.00th=[49546], 00:35:53.981 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:35:53.981 | 99.99th=[54789] 00:35:53.981 bw ( KiB/s): min=19456, max=28928, per=32.26%, avg=25605.10, stdev=2611.30, samples=10 00:35:53.981 iops : min= 152, max= 226, avg=200.00, stdev=20.40, samples=10 00:35:53.981 lat (msec) : 10=17.56%, 20=75.35%, 50=2.69%, 100=4.39% 00:35:53.981 cpu : usr=91.95%, sys=7.62%, ctx=14, majf=0, minf=105 00:35:53.981 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.981 issued rwts: total=1002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.981 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:53.981 00:35:53.981 Run status group 0 (all jobs): 00:35:53.981 READ: bw=77.5MiB/s (81.3MB/s), 24.8MiB/s-27.4MiB/s (26.0MB/s-28.8MB/s), io=391MiB (410MB), run=5005-5042msec 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
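[Annotation] The trace here enters create_subsystems 0 1 2 for the NULL_DIF=2 case. Stripped of xtrace noise, the per-id RPC sequence visible in this log is sketched below; the script actually drives these through the rpc_cmd wrapper, and the rpc.py path is an assumption based on the workspace layout shown earlier in the log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in 0 1 2; do
  # null bdev: 64 MiB, 512-byte blocks, 16-byte metadata carrying DIF type 2
  $rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
  $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
       --serial-number "53313233-$i" --allow-any-host
  $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
       -t tcp -a 10.0.0.2 -s 4420
done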
00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 bdev_null0 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 [2024-07-15 03:38:59.180316] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 bdev_null1 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 bdev_null2 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.981 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:53.982 03:38:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.982 { 00:35:53.982 "params": { 00:35:53.982 "name": "Nvme$subsystem", 00:35:53.982 "trtype": "$TEST_TRANSPORT", 00:35:53.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.982 "adrfam": "ipv4", 00:35:53.982 "trsvcid": "$NVMF_PORT", 00:35:53.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.982 "hdgst": ${hdgst:-false}, 00:35:53.982 "ddgst": ${ddgst:-false} 00:35:53.982 }, 00:35:53.982 "method": "bdev_nvme_attach_controller" 00:35:53.982 } 00:35:53.982 EOF 00:35:53.982 )") 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.982 { 00:35:53.982 "params": { 00:35:53.982 "name": "Nvme$subsystem", 00:35:53.982 "trtype": "$TEST_TRANSPORT", 00:35:53.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.982 "adrfam": "ipv4", 00:35:53.982 "trsvcid": "$NVMF_PORT", 00:35:53.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.982 "hdgst": ${hdgst:-false}, 00:35:53.982 "ddgst": ${ddgst:-false} 00:35:53.982 }, 00:35:53.982 "method": "bdev_nvme_attach_controller" 00:35:53.982 } 00:35:53.982 EOF 00:35:53.982 )") 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.982 { 00:35:53.982 "params": { 00:35:53.982 "name": "Nvme$subsystem", 00:35:53.982 "trtype": "$TEST_TRANSPORT", 00:35:53.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.982 "adrfam": "ipv4", 00:35:53.982 "trsvcid": "$NVMF_PORT", 00:35:53.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.982 "hdgst": ${hdgst:-false}, 00:35:53.982 "ddgst": ${ddgst:-false} 00:35:53.982 }, 00:35:53.982 "method": "bdev_nvme_attach_controller" 00:35:53.982 } 00:35:53.982 EOF 00:35:53.982 )") 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:53.982 "params": { 00:35:53.982 "name": "Nvme0", 00:35:53.982 "trtype": "tcp", 00:35:53.982 "traddr": "10.0.0.2", 00:35:53.982 "adrfam": "ipv4", 00:35:53.982 "trsvcid": "4420", 00:35:53.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:53.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:53.982 "hdgst": false, 00:35:53.982 "ddgst": false 00:35:53.982 }, 00:35:53.982 "method": "bdev_nvme_attach_controller" 00:35:53.982 },{ 00:35:53.982 "params": { 00:35:53.982 "name": "Nvme1", 00:35:53.982 "trtype": "tcp", 00:35:53.982 "traddr": "10.0.0.2", 00:35:53.982 "adrfam": "ipv4", 00:35:53.982 "trsvcid": "4420", 00:35:53.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:53.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:53.982 "hdgst": false, 00:35:53.982 "ddgst": false 00:35:53.982 }, 00:35:53.982 "method": "bdev_nvme_attach_controller" 00:35:53.982 },{ 00:35:53.982 "params": { 00:35:53.982 "name": "Nvme2", 00:35:53.982 "trtype": "tcp", 00:35:53.982 "traddr": "10.0.0.2", 00:35:53.982 "adrfam": "ipv4", 00:35:53.982 "trsvcid": "4420", 00:35:53.982 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:53.982 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:53.982 "hdgst": false, 00:35:53.982 "ddgst": false 00:35:53.982 }, 00:35:53.982 "method": "bdev_nvme_attach_controller" 00:35:53.982 }' 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:53.982 03:38:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.982 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:53.982 ... 00:35:53.982 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:53.982 ... 00:35:53.982 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:53.982 ... 00:35:53.982 fio-3.35 00:35:53.982 Starting 24 threads 00:35:53.982 EAL: No free 2048 kB hugepages reported on node 1 00:36:06.185 00:36:06.185 filename0: (groupid=0, jobs=1): err= 0: pid=3363680: Mon Jul 15 03:39:10 2024 00:36:06.185 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10011msec) 00:36:06.185 slat (usec): min=6, max=317, avg=51.33, stdev=21.19 00:36:06.185 clat (usec): min=12385, max=63932, avg=33161.43, stdev=2507.37 00:36:06.185 lat (usec): min=12422, max=63950, avg=33212.76, stdev=2505.97 00:36:06.185 clat percentiles (usec): 00:36:06.185 | 1.00th=[31065], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:06.185 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:06.185 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.185 | 99.00th=[37487], 99.50th=[42206], 99.90th=[63701], 99.95th=[63701], 00:36:06.185 | 99.99th=[63701] 00:36:06.185 bw ( KiB/s): min= 1664, max= 2032, per=4.15%, avg=1899.79, stdev=74.25, samples=19 00:36:06.185 iops : min= 416, max= 508, avg=474.95, stdev=18.56, samples=19 00:36:06.185 lat (msec) : 20=0.34%, 50=99.29%, 100=0.38% 00:36:06.185 cpu : usr=95.93%, sys=2.56%, ctx=119, majf=0, minf=25 00:36:06.185 IO depths : 1=4.0%, 2=10.2%, 4=24.9%, 8=52.4%, 16=8.5%, 32=0.0%, >=64=0.0% 00:36:06.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.185 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.185 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.185 filename0: (groupid=0, jobs=1): err= 0: pid=3363681: Mon Jul 15 03:39:10 2024 00:36:06.185 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10025msec) 00:36:06.185 slat (usec): min=8, max=175, avg=19.18, stdev=17.25 00:36:06.185 clat (usec): min=12031, max=39570, avg=33246.71, stdev=2029.85 00:36:06.185 lat (usec): min=12052, max=39585, avg=33265.89, stdev=2029.38 00:36:06.185 clat percentiles (usec): 00:36:06.185 | 1.00th=[27657], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:36:06.185 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:36:06.185 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:36:06.185 | 99.00th=[37487], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:36:06.185 | 99.99th=[39584] 00:36:06.185 bw ( KiB/s): min= 1792, max= 2032, per=4.18%, avg=1913.60, stdev=46.55, samples=20 00:36:06.185 iops : min= 448, max= 508, avg=478.40, stdev=11.64, samples=20 00:36:06.185 lat (msec) : 20=0.67%, 50=99.33% 00:36:06.185 cpu : 
usr=97.76%, sys=1.73%, ctx=30, majf=0, minf=35 00:36:06.185 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:36:06.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.185 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.185 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.185 filename0: (groupid=0, jobs=1): err= 0: pid=3363682: Mon Jul 15 03:39:10 2024 00:36:06.185 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10024msec) 00:36:06.185 slat (usec): min=8, max=149, avg=21.72, stdev=15.80 00:36:06.185 clat (usec): min=9049, max=37957, avg=33218.30, stdev=1986.69 00:36:06.185 lat (usec): min=9065, max=37965, avg=33240.02, stdev=1986.26 00:36:06.185 clat percentiles (usec): 00:36:06.185 | 1.00th=[30802], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:06.185 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:36:06.185 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.185 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[38011], 00:36:06.185 | 99.99th=[38011] 00:36:06.185 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.75, stdev=50.46, samples=20 00:36:06.185 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:36:06.185 lat (msec) : 10=0.15%, 20=0.52%, 50=99.33% 00:36:06.185 cpu : usr=97.74%, sys=1.87%, ctx=22, majf=0, minf=60 00:36:06.185 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:06.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.185 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.185 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.186 filename0: (groupid=0, jobs=1): err= 0: pid=3363683: Mon Jul 15 03:39:10 2024 00:36:06.186 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10007msec) 00:36:06.186 slat (usec): min=7, max=111, avg=43.33, stdev=16.70 00:36:06.186 clat (usec): min=25197, max=72539, avg=33337.73, stdev=2480.41 00:36:06.186 lat (usec): min=25212, max=72559, avg=33381.06, stdev=2477.90 00:36:06.186 clat percentiles (usec): 00:36:06.186 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:06.186 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:36:06.186 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.186 | 99.00th=[36963], 99.50th=[37487], 99.90th=[72877], 99.95th=[72877], 00:36:06.186 | 99.99th=[72877] 00:36:06.186 bw ( KiB/s): min= 1536, max= 2048, per=4.13%, avg=1893.05, stdev=100.78, samples=19 00:36:06.186 iops : min= 384, max= 512, avg=473.26, stdev=25.19, samples=19 00:36:06.186 lat (msec) : 50=99.66%, 100=0.34% 00:36:06.186 cpu : usr=93.78%, sys=3.63%, ctx=189, majf=0, minf=31 00:36:06.186 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:06.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.186 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.186 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.186 filename0: (groupid=0, jobs=1): err= 0: pid=3363684: Mon Jul 15 03:39:10 2024 00:36:06.186 read: IOPS=475, BW=1903KiB/s 
(1948kB/s)(18.6MiB/10007msec) 00:36:06.186 slat (usec): min=8, max=110, avg=28.22, stdev=20.34 00:36:06.186 clat (usec): min=11461, max=90444, avg=33502.17, stdev=3750.24 00:36:06.186 lat (usec): min=11481, max=90484, avg=33530.38, stdev=3749.68 00:36:06.186 clat percentiles (usec): 00:36:06.186 | 1.00th=[26870], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:36:06.186 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:36:06.186 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:36:06.186 | 99.00th=[37487], 99.50th=[43779], 99.90th=[90702], 99.95th=[90702], 00:36:06.186 | 99.99th=[90702] 00:36:06.186 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1896.42, stdev=103.74, samples=19 00:36:06.186 iops : min= 384, max= 512, avg=474.11, stdev=25.94, samples=19 00:36:06.186 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:36:06.186 cpu : usr=97.94%, sys=1.55%, ctx=49, majf=0, minf=34 00:36:06.186 IO depths : 1=1.4%, 2=2.9%, 4=6.1%, 8=73.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:36:06.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.186 complete : 0=0.0%, 4=90.5%, 8=8.2%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.186 issued rwts: total=4760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.186 filename0: (groupid=0, jobs=1): err= 0: pid=3363685: Mon Jul 15 03:39:10 2024 00:36:06.186 read: IOPS=475, BW=1904KiB/s (1949kB/s)(18.6MiB/10019msec) 00:36:06.186 slat (usec): min=11, max=125, avg=52.88, stdev=20.60 00:36:06.186 clat (usec): min=20650, max=66932, avg=33163.96, stdev=1581.69 00:36:06.186 lat (usec): min=20696, max=66973, avg=33216.84, stdev=1580.33 00:36:06.186 clat percentiles (usec): 00:36:06.186 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:06.186 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:06.186 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:36:06.186 | 99.00th=[36963], 99.50th=[38011], 99.90th=[51119], 99.95th=[51643], 00:36:06.186 | 99.99th=[66847] 00:36:06.186 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1900.80, stdev=75.15, samples=20 00:36:06.186 iops : min= 416, max= 512, avg=475.20, stdev=18.79, samples=20 00:36:06.186 lat (msec) : 50=99.66%, 100=0.34% 00:36:06.186 cpu : usr=93.26%, sys=3.67%, ctx=251, majf=0, minf=26 00:36:06.186 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:06.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.186 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.186 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.186 filename0: (groupid=0, jobs=1): err= 0: pid=3363686: Mon Jul 15 03:39:10 2024 00:36:06.186 read: IOPS=475, BW=1901KiB/s (1946kB/s)(18.6MiB/10001msec) 00:36:06.186 slat (usec): min=13, max=125, avg=50.85, stdev=19.00 00:36:06.186 clat (usec): min=25243, max=66031, avg=33234.25, stdev=2160.67 00:36:06.186 lat (usec): min=25258, max=66067, avg=33285.11, stdev=2159.23 00:36:06.186 clat percentiles (usec): 00:36:06.186 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:06.186 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:06.186 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.186 | 99.00th=[36963], 99.50th=[38011], 99.90th=[65799], 
99.95th=[65799], 00:36:06.186 | 99.99th=[65799] 00:36:06.186 bw ( KiB/s): min= 1648, max= 2048, per=4.15%, avg=1899.79, stdev=78.72, samples=19 00:36:06.186 iops : min= 412, max= 512, avg=474.95, stdev=19.68, samples=19 00:36:06.186 lat (msec) : 50=99.66%, 100=0.34% 00:36:06.186 cpu : usr=97.14%, sys=1.98%, ctx=99, majf=0, minf=35 00:36:06.186 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:06.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.186 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.186 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.186 filename0: (groupid=0, jobs=1): err= 0: pid=3363687: Mon Jul 15 03:39:10 2024 00:36:06.186 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10025msec) 00:36:06.186 slat (usec): min=8, max=100, avg=26.26, stdev=17.03 00:36:06.186 clat (usec): min=14782, max=54293, avg=33314.27, stdev=1465.96 00:36:06.186 lat (usec): min=14794, max=54329, avg=33340.54, stdev=1466.38 00:36:06.186 clat percentiles (usec): 00:36:06.186 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:06.186 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:36:06.186 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.186 | 99.00th=[37487], 99.50th=[38011], 99.90th=[42206], 99.95th=[42206], 00:36:06.186 | 99.99th=[54264] 00:36:06.186 bw ( KiB/s): min= 1779, max= 2048, per=4.16%, avg=1907.35, stdev=58.98, samples=20 00:36:06.186 iops : min= 444, max= 512, avg=476.80, stdev=14.83, samples=20 00:36:06.186 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:36:06.186 cpu : usr=96.80%, sys=2.16%, ctx=84, majf=0, minf=33 00:36:06.186 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:06.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.186 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.186 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.186 filename1: (groupid=0, jobs=1): err= 0: pid=3363688: Mon Jul 15 03:39:10 2024 00:36:06.186 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10007msec) 00:36:06.186 slat (usec): min=13, max=102, avg=43.57, stdev=14.10 00:36:06.186 clat (usec): min=12316, max=59362, avg=33190.47, stdev=2143.96 00:36:06.186 lat (usec): min=12335, max=59397, avg=33234.04, stdev=2144.70 00:36:06.186 clat percentiles (usec): 00:36:06.186 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:06.186 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:06.186 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.186 | 99.00th=[36963], 99.50th=[37487], 99.90th=[58983], 99.95th=[59507], 00:36:06.186 | 99.99th=[59507] 00:36:06.186 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1899.79, stdev=77.07, samples=19 00:36:06.186 iops : min= 416, max= 512, avg=474.95, stdev=19.27, samples=19 00:36:06.186 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:36:06.186 cpu : usr=97.77%, sys=1.71%, ctx=38, majf=0, minf=30 00:36:06.186 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:06.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.186 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:06.186 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.186 filename1: (groupid=0, jobs=1): err= 0: pid=3363689: Mon Jul 15 03:39:10 2024 00:36:06.186 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10007msec) 00:36:06.186 slat (usec): min=8, max=121, avg=42.98, stdev=13.98 00:36:06.186 clat (usec): min=12178, max=59348, avg=33175.69, stdev=2150.54 00:36:06.186 lat (usec): min=12187, max=59391, avg=33218.67, stdev=2151.96 00:36:06.186 clat percentiles (usec): 00:36:06.186 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:06.186 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:06.186 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.187 | 99.00th=[36963], 99.50th=[37487], 99.90th=[58983], 99.95th=[59507], 00:36:06.187 | 99.99th=[59507] 00:36:06.187 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1899.79, stdev=77.07, samples=19 00:36:06.187 iops : min= 416, max= 512, avg=474.95, stdev=19.27, samples=19 00:36:06.187 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:36:06.187 cpu : usr=97.36%, sys=1.78%, ctx=26, majf=0, minf=30 00:36:06.187 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:06.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.187 filename1: (groupid=0, jobs=1): err= 0: pid=3363690: Mon Jul 15 03:39:10 2024 00:36:06.187 read: IOPS=476, BW=1908KiB/s (1953kB/s)(18.7MiB/10027msec) 00:36:06.187 slat (usec): min=8, max=141, avg=32.67, stdev=20.20 00:36:06.187 clat (usec): min=21658, max=42600, avg=33294.61, stdev=1262.20 00:36:06.187 lat (usec): min=21683, max=42662, avg=33327.27, stdev=1261.70 00:36:06.187 clat percentiles (usec): 00:36:06.187 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:06.187 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:06.187 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.187 | 99.00th=[37487], 99.50th=[38011], 99.90th=[40109], 99.95th=[40633], 00:36:06.187 | 99.99th=[42730] 00:36:06.187 bw ( KiB/s): min= 1792, max= 2032, per=4.16%, avg=1907.20, stdev=53.85, samples=20 00:36:06.187 iops : min= 448, max= 508, avg=476.80, stdev=13.46, samples=20 00:36:06.187 lat (msec) : 50=100.00% 00:36:06.187 cpu : usr=98.13%, sys=1.43%, ctx=21, majf=0, minf=30 00:36:06.187 IO depths : 1=1.7%, 2=7.9%, 4=25.0%, 8=54.6%, 16=10.8%, 32=0.0%, >=64=0.0% 00:36:06.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 issued rwts: total=4782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.187 filename1: (groupid=0, jobs=1): err= 0: pid=3363691: Mon Jul 15 03:39:10 2024 00:36:06.187 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10006msec) 00:36:06.187 slat (usec): min=8, max=102, avg=42.00, stdev=14.93 00:36:06.187 clat (usec): min=25165, max=80969, avg=33321.96, stdev=2475.42 00:36:06.187 lat (usec): min=25176, max=81001, avg=33363.96, stdev=2473.33 00:36:06.187 clat percentiles (usec): 00:36:06.187 | 1.00th=[31851], 5.00th=[32113], 
10.00th=[32375], 20.00th=[32375], 00:36:06.187 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:06.187 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.187 | 99.00th=[37487], 99.50th=[38011], 99.90th=[70779], 99.95th=[70779], 00:36:06.187 | 99.99th=[81265] 00:36:06.187 bw ( KiB/s): min= 1536, max= 2048, per=4.13%, avg=1893.05, stdev=100.78, samples=19 00:36:06.187 iops : min= 384, max= 512, avg=473.26, stdev=25.19, samples=19 00:36:06.187 lat (msec) : 50=99.66%, 100=0.34% 00:36:06.187 cpu : usr=95.59%, sys=2.61%, ctx=204, majf=0, minf=25 00:36:06.187 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:06.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.187 filename1: (groupid=0, jobs=1): err= 0: pid=3363692: Mon Jul 15 03:39:10 2024 00:36:06.187 read: IOPS=515, BW=2063KiB/s (2113kB/s)(20.2MiB/10007msec) 00:36:06.187 slat (usec): min=8, max=138, avg=27.55, stdev=20.46 00:36:06.187 clat (msec): min=10, max=102, avg=30.84, stdev= 6.45 00:36:06.187 lat (msec): min=10, max=102, avg=30.86, stdev= 6.45 00:36:06.187 clat percentiles (msec): 00:36:06.187 | 1.00th=[ 20], 5.00th=[ 21], 10.00th=[ 22], 20.00th=[ 26], 00:36:06.187 | 30.00th=[ 30], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:06.187 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 38], 00:36:06.187 | 99.00th=[ 46], 99.50th=[ 48], 99.90th=[ 91], 99.95th=[ 91], 00:36:06.187 | 99.99th=[ 103] 00:36:06.187 bw ( KiB/s): min= 1680, max= 2400, per=4.51%, avg=2065.68, stdev=174.53, samples=19 00:36:06.187 iops : min= 420, max= 600, avg=516.42, stdev=43.63, samples=19 00:36:06.187 lat (msec) : 20=2.13%, 50=97.48%, 100=0.35%, 250=0.04% 00:36:06.187 cpu : usr=97.08%, sys=2.01%, ctx=148, majf=0, minf=47 00:36:06.187 IO depths : 1=1.4%, 2=3.4%, 4=10.4%, 8=72.2%, 16=12.7%, 32=0.0%, >=64=0.0% 00:36:06.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 complete : 0=0.0%, 4=90.4%, 8=5.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 issued rwts: total=5162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.187 filename1: (groupid=0, jobs=1): err= 0: pid=3363693: Mon Jul 15 03:39:10 2024 00:36:06.187 read: IOPS=477, BW=1909KiB/s (1954kB/s)(18.7MiB/10026msec) 00:36:06.187 slat (usec): min=8, max=110, avg=29.99, stdev=18.22 00:36:06.187 clat (usec): min=16997, max=56141, avg=33254.45, stdev=1463.72 00:36:06.187 lat (usec): min=17032, max=56176, avg=33284.45, stdev=1463.49 00:36:06.187 clat percentiles (usec): 00:36:06.187 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:06.187 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:06.187 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.187 | 99.00th=[36963], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:36:06.187 | 99.99th=[56361] 00:36:06.187 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1907.20, stdev=57.24, samples=20 00:36:06.187 iops : min= 448, max= 512, avg=476.80, stdev=14.31, samples=20 00:36:06.187 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:36:06.187 cpu : usr=96.92%, sys=1.99%, ctx=136, majf=0, minf=34 00:36:06.187 IO depths : 1=6.2%, 
2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:06.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.187 filename1: (groupid=0, jobs=1): err= 0: pid=3363694: Mon Jul 15 03:39:10 2024 00:36:06.187 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10001msec) 00:36:06.187 slat (usec): min=8, max=152, avg=61.13, stdev=26.90 00:36:06.187 clat (usec): min=12904, max=37870, avg=32881.33, stdev=1745.00 00:36:06.187 lat (usec): min=12913, max=37908, avg=32942.46, stdev=1747.85 00:36:06.187 clat percentiles (usec): 00:36:06.187 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:36:06.187 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:36:06.187 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.187 | 99.00th=[36439], 99.50th=[36963], 99.90th=[37487], 99.95th=[38011], 00:36:06.187 | 99.99th=[38011] 00:36:06.187 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.26, stdev=51.80, samples=19 00:36:06.187 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:36:06.187 lat (msec) : 20=0.52%, 50=99.48% 00:36:06.187 cpu : usr=94.12%, sys=3.24%, ctx=89, majf=0, minf=27 00:36:06.187 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:06.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.187 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.187 filename1: (groupid=0, jobs=1): err= 0: pid=3363695: Mon Jul 15 03:39:10 2024 00:36:06.187 read: IOPS=475, BW=1904KiB/s (1949kB/s)(18.6MiB/10019msec) 00:36:06.187 slat (usec): min=8, max=111, avg=42.63, stdev=18.65 00:36:06.187 clat (usec): min=24752, max=51453, avg=33255.72, stdev=1465.54 00:36:06.187 lat (usec): min=24762, max=51494, avg=33298.36, stdev=1464.85 00:36:06.187 clat percentiles (usec): 00:36:06.187 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:06.187 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:36:06.187 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.187 | 99.00th=[36963], 99.50th=[38011], 99.90th=[51119], 99.95th=[51643], 00:36:06.187 | 99.99th=[51643] 00:36:06.187 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1900.80, stdev=75.15, samples=20 00:36:06.187 iops : min= 416, max= 512, avg=475.20, stdev=18.79, samples=20 00:36:06.187 lat (msec) : 50=99.66%, 100=0.34% 00:36:06.187 cpu : usr=98.03%, sys=1.54%, ctx=30, majf=0, minf=32 00:36:06.187 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:06.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.188 filename2: (groupid=0, jobs=1): err= 0: pid=3363696: Mon Jul 15 03:39:10 2024 00:36:06.188 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10016msec) 00:36:06.188 slat (usec): min=6, max=103, avg=31.95, stdev=21.61 00:36:06.188 clat 
(usec): min=30268, max=51515, avg=33315.92, stdev=1443.02 00:36:06.188 lat (usec): min=30310, max=51533, avg=33347.86, stdev=1441.00 00:36:06.188 clat percentiles (usec): 00:36:06.188 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:06.188 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:36:06.188 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.188 | 99.00th=[36963], 99.50th=[37487], 99.90th=[51643], 99.95th=[51643], 00:36:06.188 | 99.99th=[51643] 00:36:06.188 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1900.95, stdev=74.66, samples=20 00:36:06.188 iops : min= 416, max= 512, avg=475.20, stdev=18.79, samples=20 00:36:06.188 lat (msec) : 50=99.66%, 100=0.34% 00:36:06.188 cpu : usr=98.19%, sys=1.32%, ctx=69, majf=0, minf=33 00:36:06.188 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:06.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.188 filename2: (groupid=0, jobs=1): err= 0: pid=3363697: Mon Jul 15 03:39:10 2024 00:36:06.188 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10025msec) 00:36:06.188 slat (usec): min=8, max=107, avg=36.66, stdev=21.94 00:36:06.188 clat (usec): min=13136, max=53258, avg=33185.45, stdev=1916.34 00:36:06.188 lat (usec): min=13145, max=53268, avg=33222.11, stdev=1916.06 00:36:06.188 clat percentiles (usec): 00:36:06.188 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:06.188 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:36:06.188 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.188 | 99.00th=[37487], 99.50th=[41681], 99.90th=[51643], 99.95th=[52691], 00:36:06.188 | 99.99th=[53216] 00:36:06.188 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1907.35, stdev=56.93, samples=20 00:36:06.188 iops : min= 448, max= 512, avg=476.80, stdev=14.31, samples=20 00:36:06.188 lat (msec) : 20=0.31%, 50=99.44%, 100=0.25% 00:36:06.188 cpu : usr=97.85%, sys=1.75%, ctx=27, majf=0, minf=31 00:36:06.188 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:06.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.188 filename2: (groupid=0, jobs=1): err= 0: pid=3363698: Mon Jul 15 03:39:10 2024 00:36:06.188 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10048msec) 00:36:06.188 slat (nsec): min=8140, max=95071, avg=19687.18, stdev=9287.42 00:36:06.188 clat (usec): min=13011, max=90142, avg=33278.86, stdev=4213.43 00:36:06.188 lat (usec): min=13029, max=90185, avg=33298.55, stdev=4214.23 00:36:06.188 clat percentiles (usec): 00:36:06.188 | 1.00th=[20579], 5.00th=[30016], 10.00th=[32375], 20.00th=[32637], 00:36:06.188 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:06.188 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:36:06.188 | 99.00th=[42206], 99.50th=[52167], 99.90th=[89654], 99.95th=[89654], 00:36:06.188 | 99.99th=[89654] 00:36:06.188 bw ( KiB/s): min= 1536, max= 2160, per=4.16%, 
avg=1907.90, stdev=128.59, samples=20 00:36:06.188 iops : min= 384, max= 540, avg=476.95, stdev=32.17, samples=20 00:36:06.188 lat (msec) : 20=0.17%, 50=99.17%, 100=0.67% 00:36:06.188 cpu : usr=98.02%, sys=1.59%, ctx=17, majf=0, minf=37 00:36:06.188 IO depths : 1=5.1%, 2=10.5%, 4=22.1%, 8=54.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:06.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 complete : 0=0.0%, 4=93.4%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 issued rwts: total=4796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.188 filename2: (groupid=0, jobs=1): err= 0: pid=3363699: Mon Jul 15 03:39:10 2024 00:36:06.188 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10025msec) 00:36:06.188 slat (usec): min=8, max=190, avg=46.60, stdev=28.30 00:36:06.188 clat (usec): min=12141, max=37703, avg=32990.47, stdev=1960.17 00:36:06.188 lat (usec): min=12153, max=37781, avg=33037.07, stdev=1960.63 00:36:06.188 clat percentiles (usec): 00:36:06.188 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:06.188 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:06.188 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.188 | 99.00th=[36963], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:36:06.188 | 99.99th=[37487] 00:36:06.188 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.60, stdev=50.44, samples=20 00:36:06.188 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:36:06.188 lat (msec) : 20=0.67%, 50=99.33% 00:36:06.188 cpu : usr=98.17%, sys=1.40%, ctx=14, majf=0, minf=28 00:36:06.188 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:06.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.188 filename2: (groupid=0, jobs=1): err= 0: pid=3363700: Mon Jul 15 03:39:10 2024 00:36:06.188 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10012msec) 00:36:06.188 slat (usec): min=6, max=108, avg=45.60, stdev=17.01 00:36:06.188 clat (usec): min=12409, max=74330, avg=33195.06, stdev=2432.15 00:36:06.188 lat (usec): min=12425, max=74350, avg=33240.66, stdev=2431.70 00:36:06.188 clat percentiles (usec): 00:36:06.188 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:06.188 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:06.188 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.188 | 99.00th=[36963], 99.50th=[37487], 99.90th=[64226], 99.95th=[64226], 00:36:06.188 | 99.99th=[73925] 00:36:06.188 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1900.95, stdev=74.66, samples=20 00:36:06.188 iops : min= 416, max= 512, avg=475.20, stdev=18.79, samples=20 00:36:06.188 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:36:06.188 cpu : usr=93.09%, sys=3.80%, ctx=340, majf=0, minf=24 00:36:06.188 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:06.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.188 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:36:06.188 filename2: (groupid=0, jobs=1): err= 0: pid=3363701: Mon Jul 15 03:39:10 2024 00:36:06.188 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.7MiB/10012msec) 00:36:06.188 slat (usec): min=10, max=132, avg=48.74, stdev=21.63 00:36:06.188 clat (usec): min=12126, max=87071, avg=33037.60, stdev=2975.18 00:36:06.188 lat (usec): min=12171, max=87108, avg=33086.35, stdev=2975.40 00:36:06.188 clat percentiles (usec): 00:36:06.188 | 1.00th=[23987], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:06.188 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:36:06.188 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:36:06.188 | 99.00th=[37487], 99.50th=[54789], 99.90th=[64750], 99.95th=[64750], 00:36:06.188 | 99.99th=[87557] 00:36:06.188 bw ( KiB/s): min= 1763, max= 2048, per=4.16%, avg=1905.75, stdev=60.58, samples=20 00:36:06.188 iops : min= 440, max= 512, avg=476.40, stdev=15.24, samples=20 00:36:06.188 lat (msec) : 20=0.42%, 50=99.00%, 100=0.59% 00:36:06.188 cpu : usr=93.46%, sys=3.66%, ctx=321, majf=0, minf=26 00:36:06.188 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:06.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.188 issued rwts: total=4780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.188 filename2: (groupid=0, jobs=1): err= 0: pid=3363702: Mon Jul 15 03:39:10 2024 00:36:06.188 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.7MiB/10027msec) 00:36:06.188 slat (usec): min=10, max=151, avg=41.90, stdev=17.70 00:36:06.188 clat (usec): min=21686, max=38103, avg=33173.92, stdev=1202.75 00:36:06.188 lat (usec): min=21711, max=38146, avg=33215.82, stdev=1201.88 00:36:06.188 clat percentiles (usec): 00:36:06.189 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:06.189 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:06.189 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.189 | 99.00th=[37487], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011], 00:36:06.189 | 99.99th=[38011] 00:36:06.189 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1907.20, stdev=57.24, samples=20 00:36:06.189 iops : min= 448, max= 512, avg=476.80, stdev=14.31, samples=20 00:36:06.189 lat (msec) : 50=100.00% 00:36:06.189 cpu : usr=98.12%, sys=1.35%, ctx=31, majf=0, minf=33 00:36:06.189 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:06.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.189 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.189 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.189 filename2: (groupid=0, jobs=1): err= 0: pid=3363703: Mon Jul 15 03:39:10 2024 00:36:06.189 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.7MiB/10027msec) 00:36:06.189 slat (usec): min=8, max=129, avg=30.90, stdev=17.22 00:36:06.189 clat (usec): min=15021, max=44840, avg=33273.03, stdev=1237.28 00:36:06.189 lat (usec): min=15066, max=44862, avg=33303.93, stdev=1236.04 00:36:06.189 clat percentiles (usec): 00:36:06.189 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:06.189 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 
60.00th=[33162], 00:36:06.189 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:06.189 | 99.00th=[36963], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011], 00:36:06.189 | 99.99th=[44827] 00:36:06.189 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1907.20, stdev=57.24, samples=20 00:36:06.189 iops : min= 448, max= 512, avg=476.80, stdev=14.31, samples=20 00:36:06.189 lat (msec) : 20=0.04%, 50=99.96% 00:36:06.189 cpu : usr=95.44%, sys=2.66%, ctx=157, majf=0, minf=35 00:36:06.189 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:06.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.189 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.189 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:06.189 00:36:06.189 Run status group 0 (all jobs): 00:36:06.189 READ: bw=44.7MiB/s (46.9MB/s), 1899KiB/s-2063KiB/s (1945kB/s-2113kB/s), io=449MiB (471MB), run=10001-10048msec 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.189 03:39:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.189 bdev_null0 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.189 [2024-07-15 03:39:10.891278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.189 bdev_null1 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.189 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:06.190 { 00:36:06.190 "params": { 00:36:06.190 "name": "Nvme$subsystem", 00:36:06.190 "trtype": "$TEST_TRANSPORT", 00:36:06.190 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:36:06.190 "adrfam": "ipv4", 00:36:06.190 "trsvcid": "$NVMF_PORT", 00:36:06.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:06.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:06.190 "hdgst": ${hdgst:-false}, 00:36:06.190 "ddgst": ${ddgst:-false} 00:36:06.190 }, 00:36:06.190 "method": "bdev_nvme_attach_controller" 00:36:06.190 } 00:36:06.190 EOF 00:36:06.190 )") 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:06.190 { 00:36:06.190 "params": { 00:36:06.190 "name": "Nvme$subsystem", 00:36:06.190 "trtype": "$TEST_TRANSPORT", 00:36:06.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:06.190 "adrfam": "ipv4", 00:36:06.190 "trsvcid": "$NVMF_PORT", 00:36:06.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:06.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:06.190 "hdgst": ${hdgst:-false}, 00:36:06.190 "ddgst": ${ddgst:-false} 00:36:06.190 }, 00:36:06.190 "method": "bdev_nvme_attach_controller" 00:36:06.190 } 00:36:06.190 EOF 00:36:06.190 )") 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:06.190 
03:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:06.190 "params": { 00:36:06.190 "name": "Nvme0", 00:36:06.190 "trtype": "tcp", 00:36:06.190 "traddr": "10.0.0.2", 00:36:06.190 "adrfam": "ipv4", 00:36:06.190 "trsvcid": "4420", 00:36:06.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:06.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:06.190 "hdgst": false, 00:36:06.190 "ddgst": false 00:36:06.190 }, 00:36:06.190 "method": "bdev_nvme_attach_controller" 00:36:06.190 },{ 00:36:06.190 "params": { 00:36:06.190 "name": "Nvme1", 00:36:06.190 "trtype": "tcp", 00:36:06.190 "traddr": "10.0.0.2", 00:36:06.190 "adrfam": "ipv4", 00:36:06.190 "trsvcid": "4420", 00:36:06.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:06.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:06.190 "hdgst": false, 00:36:06.190 "ddgst": false 00:36:06.190 }, 00:36:06.190 "method": "bdev_nvme_attach_controller" 00:36:06.190 }' 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:06.190 03:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:06.190 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:06.190 ... 00:36:06.190 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:06.190 ... 
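Between the config dump above and the fio banner below, it is worth spelling out how the job actually gets launched: the trace shows the SPDK fio plugin injected via LD_PRELOAD, the generated bdev JSON handed over on /dev/fd/62, and the generated job file on /dev/fd/61. A hedged reconstruction using ordinary files instead of fd redirection (job parameters are read off the dif.sh@115 settings and the filename0/filename1 banner above; the section names, Nvme*n1 bdev names, and /tmp paths are assumptions, not the literal generated file):

cat > /tmp/dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
# /tmp/dif_target.json would hold the bdev_nvme_attach_controller config
# printed in the trace above, wrapped as sketched later in this log.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /tmp/dif_target.json /tmp/dif.fio

The two [filenameN] sections with numjobs=2 are what account for the "Starting 4 threads" line that follows.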
00:36:06.190 fio-3.35 00:36:06.190 Starting 4 threads 00:36:06.190 EAL: No free 2048 kB hugepages reported on node 1 00:36:11.453 00:36:11.453 filename0: (groupid=0, jobs=1): err= 0: pid=3365691: Mon Jul 15 03:39:17 2024 00:36:11.453 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5002msec) 00:36:11.453 slat (nsec): min=4720, max=58606, avg=14378.23, stdev=7525.24 00:36:11.453 clat (usec): min=966, max=8002, avg=4323.99, stdev=664.54 00:36:11.453 lat (usec): min=987, max=8027, avg=4338.37, stdev=664.42 00:36:11.453 clat percentiles (usec): 00:36:11.453 | 1.00th=[ 2704], 5.00th=[ 3392], 10.00th=[ 3687], 20.00th=[ 3949], 00:36:11.453 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:36:11.453 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 5080], 95.00th=[ 5538], 00:36:11.453 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 7504], 99.95th=[ 7635], 00:36:11.453 | 99.99th=[ 8029] 00:36:11.453 bw ( KiB/s): min=14160, max=15168, per=24.60%, avg=14606.22, stdev=378.92, samples=9 00:36:11.454 iops : min= 1770, max= 1896, avg=1825.78, stdev=47.37, samples=9 00:36:11.454 lat (usec) : 1000=0.07% 00:36:11.454 lat (msec) : 2=0.37%, 4=22.37%, 10=77.19% 00:36:11.454 cpu : usr=92.26%, sys=6.88%, ctx=29, majf=0, minf=0 00:36:11.454 IO depths : 1=0.1%, 2=10.3%, 4=61.9%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:11.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.454 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.454 issued rwts: total=9147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:11.454 filename0: (groupid=0, jobs=1): err= 0: pid=3365692: Mon Jul 15 03:39:17 2024 00:36:11.454 read: IOPS=1908, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5003msec) 00:36:11.454 slat (nsec): min=4244, max=64498, avg=12600.35, stdev=6363.54 00:36:11.454 clat (usec): min=1093, max=8217, avg=4150.55, stdev=697.51 00:36:11.454 lat (usec): min=1105, max=8232, avg=4163.15, stdev=697.90 00:36:11.454 clat percentiles (usec): 00:36:11.454 | 1.00th=[ 2474], 5.00th=[ 3032], 10.00th=[ 3326], 20.00th=[ 3720], 00:36:11.454 | 30.00th=[ 3916], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4293], 00:36:11.454 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5276], 00:36:11.454 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7767], 99.95th=[ 7898], 00:36:11.454 | 99.99th=[ 8225] 00:36:11.454 bw ( KiB/s): min=14272, max=16032, per=25.73%, avg=15273.50, stdev=627.78, samples=10 00:36:11.454 iops : min= 1784, max= 2004, avg=1909.10, stdev=78.37, samples=10 00:36:11.454 lat (msec) : 2=0.26%, 4=34.46%, 10=65.28% 00:36:11.454 cpu : usr=92.40%, sys=7.02%, ctx=11, majf=0, minf=0 00:36:11.454 IO depths : 1=0.1%, 2=8.6%, 4=62.6%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:11.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.454 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.454 issued rwts: total=9550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:11.454 filename1: (groupid=0, jobs=1): err= 0: pid=3365693: Mon Jul 15 03:39:17 2024 00:36:11.454 read: IOPS=1815, BW=14.2MiB/s (14.9MB/s)(70.9MiB/5001msec) 00:36:11.454 slat (nsec): min=7279, max=61219, avg=13710.37, stdev=7075.78 00:36:11.454 clat (usec): min=721, max=8128, avg=4361.56, stdev=738.57 00:36:11.454 lat (usec): min=729, max=8149, avg=4375.27, stdev=738.30 00:36:11.454 clat percentiles (usec): 00:36:11.454 
| 1.00th=[ 2737], 5.00th=[ 3425], 10.00th=[ 3720], 20.00th=[ 3949], 00:36:11.454 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:36:11.454 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 5211], 95.00th=[ 5932], 00:36:11.454 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 7767], 99.95th=[ 7963], 00:36:11.454 | 99.99th=[ 8160] 00:36:11.454 bw ( KiB/s): min=13808, max=15088, per=24.36%, avg=14463.67, stdev=393.28, samples=9 00:36:11.454 iops : min= 1726, max= 1886, avg=1807.89, stdev=49.12, samples=9 00:36:11.454 lat (usec) : 750=0.03% 00:36:11.454 lat (msec) : 2=0.26%, 4=23.13%, 10=76.57% 00:36:11.454 cpu : usr=92.72%, sys=6.68%, ctx=14, majf=0, minf=0 00:36:11.454 IO depths : 1=0.1%, 2=9.6%, 4=61.7%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:11.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.454 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.454 issued rwts: total=9079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:11.454 filename1: (groupid=0, jobs=1): err= 0: pid=3365694: Mon Jul 15 03:39:17 2024 00:36:11.454 read: IOPS=1870, BW=14.6MiB/s (15.3MB/s)(73.1MiB/5004msec) 00:36:11.454 slat (nsec): min=4215, max=73326, avg=12759.04, stdev=6470.39 00:36:11.454 clat (usec): min=987, max=8068, avg=4233.20, stdev=660.36 00:36:11.454 lat (usec): min=995, max=8084, avg=4245.96, stdev=660.54 00:36:11.454 clat percentiles (usec): 00:36:11.454 | 1.00th=[ 2540], 5.00th=[ 3195], 10.00th=[ 3490], 20.00th=[ 3851], 00:36:11.454 | 30.00th=[ 4015], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4359], 00:36:11.454 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4883], 95.00th=[ 5473], 00:36:11.454 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7570], 99.95th=[ 7701], 00:36:11.454 | 99.99th=[ 8094] 00:36:11.454 bw ( KiB/s): min=13712, max=15920, per=25.20%, avg=14963.20, stdev=733.79, samples=10 00:36:11.454 iops : min= 1714, max= 1990, avg=1870.40, stdev=91.72, samples=10 00:36:11.454 lat (usec) : 1000=0.02% 00:36:11.454 lat (msec) : 2=0.26%, 4=28.48%, 10=71.24% 00:36:11.454 cpu : usr=92.32%, sys=6.86%, ctx=12, majf=0, minf=0 00:36:11.454 IO depths : 1=0.2%, 2=11.8%, 4=60.4%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:11.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.454 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.454 issued rwts: total=9360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:11.454 00:36:11.454 Run status group 0 (all jobs): 00:36:11.454 READ: bw=58.0MiB/s (60.8MB/s), 14.2MiB/s-14.9MiB/s (14.9MB/s-15.6MB/s), io=290MiB (304MB), run=5001-5004msec 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.454 00:36:11.454 real 0m24.345s 00:36:11.454 user 4m29.367s 00:36:11.454 sys 0m8.767s 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:11.454 03:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.454 ************************************ 00:36:11.454 END TEST fio_dif_rand_params 00:36:11.454 ************************************ 00:36:11.454 03:39:17 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:11.454 03:39:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:11.454 03:39:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:11.454 03:39:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:11.454 03:39:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:11.454 ************************************ 00:36:11.454 START TEST fio_dif_digest 00:36:11.454 ************************************ 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:11.454 bdev_null0 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.454 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:11.455 [2024-07-15 03:39:17.412239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:11.455 { 00:36:11.455 "params": { 00:36:11.455 "name": "Nvme$subsystem", 00:36:11.455 "trtype": "$TEST_TRANSPORT", 00:36:11.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.455 "adrfam": "ipv4", 00:36:11.455 "trsvcid": "$NVMF_PORT", 00:36:11.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.455 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.455 "hdgst": ${hdgst:-false}, 00:36:11.455 "ddgst": ${ddgst:-false} 00:36:11.455 }, 00:36:11.455 "method": "bdev_nvme_attach_controller" 00:36:11.455 } 00:36:11.455 EOF 00:36:11.455 )") 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:11.455 "params": { 00:36:11.455 "name": "Nvme0", 00:36:11.455 "trtype": "tcp", 00:36:11.455 "traddr": "10.0.0.2", 00:36:11.455 "adrfam": "ipv4", 00:36:11.455 "trsvcid": "4420", 00:36:11.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:11.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:11.455 "hdgst": true, 00:36:11.455 "ddgst": true 00:36:11.455 }, 00:36:11.455 "method": "bdev_nvme_attach_controller" 00:36:11.455 }' 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:11.455 03:39:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.715 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:11.715 ... 
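For reference while reading the three-thread digest run below, the target side of this test was assembled entirely over RPC earlier in the trace. rpc_cmd forwards its arguments to scripts/rpk... rather, scripts/rpc.py in the SPDK tree, so the standalone equivalents are (RPC socket option omitted; arguments copied verbatim from the trace):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

bdev_null_create's 64 and 512 are the bdev size in megabytes and its block size in bytes; --md-size 16 --dif-type 3 give each block a metadata region carrying type-3 protection information, while the header/data digests negotiated by the host config guard the TCP transport itself.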
00:36:11.715 fio-3.35 00:36:11.715 Starting 3 threads 00:36:11.715 EAL: No free 2048 kB hugepages reported on node 1 00:36:23.916 00:36:23.916 filename0: (groupid=0, jobs=1): err= 0: pid=3366555: Mon Jul 15 03:39:28 2024 00:36:23.916 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(239MiB/10047msec) 00:36:23.916 slat (nsec): min=7878, max=66803, avg=24298.50, stdev=7076.26 00:36:23.916 clat (usec): min=11976, max=56596, avg=15703.87, stdev=2281.31 00:36:23.916 lat (usec): min=12002, max=56623, avg=15728.17, stdev=2281.26 00:36:23.916 clat percentiles (usec): 00:36:23.916 | 1.00th=[12911], 5.00th=[13698], 10.00th=[14222], 20.00th=[14615], 00:36:23.916 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15926], 00:36:23.916 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17695], 00:36:23.916 | 99.00th=[18744], 99.50th=[19530], 99.90th=[56361], 99.95th=[56361], 00:36:23.916 | 99.99th=[56361] 00:36:23.916 bw ( KiB/s): min=23552, max=25600, per=33.30%, avg=24460.80, stdev=590.09, samples=20 00:36:23.916 iops : min= 184, max= 200, avg=191.10, stdev= 4.61, samples=20 00:36:23.916 lat (msec) : 20=99.53%, 50=0.26%, 100=0.21% 00:36:23.916 cpu : usr=93.23%, sys=6.19%, ctx=17, majf=0, minf=160 00:36:23.916 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:23.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.916 issued rwts: total=1913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.916 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:23.916 filename0: (groupid=0, jobs=1): err= 0: pid=3366556: Mon Jul 15 03:39:28 2024 00:36:23.916 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(243MiB/10049msec) 00:36:23.916 slat (nsec): min=6276, max=87343, avg=16620.20, stdev=5780.05 00:36:23.916 clat (usec): min=9056, max=52000, avg=15465.13, stdev=1731.25 00:36:23.916 lat (usec): min=9069, max=52014, avg=15481.75, stdev=1731.23 00:36:23.916 clat percentiles (usec): 00:36:23.916 | 1.00th=[12256], 5.00th=[13435], 10.00th=[13829], 20.00th=[14353], 00:36:23.916 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15795], 00:36:23.916 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[17695], 00:36:23.916 | 99.00th=[18744], 99.50th=[19006], 99.90th=[49021], 99.95th=[52167], 00:36:23.916 | 99.99th=[52167] 00:36:23.916 bw ( KiB/s): min=23040, max=27648, per=33.82%, avg=24844.80, stdev=1125.04, samples=20 00:36:23.916 iops : min= 180, max= 216, avg=194.10, stdev= 8.79, samples=20 00:36:23.916 lat (msec) : 10=0.05%, 20=99.85%, 50=0.05%, 100=0.05% 00:36:23.916 cpu : usr=92.66%, sys=6.87%, ctx=18, majf=0, minf=181 00:36:23.916 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:23.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.916 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.916 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:23.916 filename0: (groupid=0, jobs=1): err= 0: pid=3366557: Mon Jul 15 03:39:28 2024 00:36:23.916 read: IOPS=190, BW=23.8MiB/s (24.9MB/s)(239MiB/10048msec) 00:36:23.916 slat (nsec): min=6089, max=45701, avg=16759.38, stdev=5500.98 00:36:23.916 clat (usec): min=9949, max=51688, avg=15737.53, stdev=1652.02 00:36:23.916 lat (usec): min=9964, max=51707, avg=15754.29, stdev=1651.90 00:36:23.916 clat percentiles (usec): 00:36:23.916 | 
1.00th=[12780], 5.00th=[13698], 10.00th=[14222], 20.00th=[14746], 00:36:23.916 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:36:23.916 | 70.00th=[16319], 80.00th=[16581], 90.00th=[17171], 95.00th=[17695], 00:36:23.916 | 99.00th=[18482], 99.50th=[19268], 99.90th=[49021], 99.95th=[51643], 00:36:23.916 | 99.99th=[51643] 00:36:23.916 bw ( KiB/s): min=23040, max=26368, per=33.23%, avg=24412.05, stdev=740.17, samples=20 00:36:23.916 iops : min= 180, max= 206, avg=190.70, stdev= 5.78, samples=20 00:36:23.916 lat (msec) : 10=0.10%, 20=99.63%, 50=0.21%, 100=0.05% 00:36:23.916 cpu : usr=93.24%, sys=6.28%, ctx=24, majf=0, minf=125 00:36:23.916 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:23.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.917 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.917 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:23.917 00:36:23.917 Run status group 0 (all jobs): 00:36:23.917 READ: bw=71.7MiB/s (75.2MB/s), 23.8MiB/s-24.2MiB/s (24.9MB/s-25.4MB/s), io=721MiB (756MB), run=10047-10049msec 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.917 00:36:23.917 real 0m11.171s 00:36:23.917 user 0m29.216s 00:36:23.917 sys 0m2.231s 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:23.917 03:39:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:23.917 ************************************ 00:36:23.917 END TEST fio_dif_digest 00:36:23.917 ************************************ 00:36:23.917 03:39:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:23.917 03:39:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:23.917 03:39:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:36:23.917 rmmod nvme_tcp 00:36:23.917 rmmod nvme_fabrics 00:36:23.917 rmmod nvme_keyring 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3359885 ']' 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3359885 00:36:23.917 03:39:28 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3359885 ']' 00:36:23.917 03:39:28 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3359885 00:36:23.917 03:39:28 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:23.917 03:39:28 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:23.917 03:39:28 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3359885 00:36:23.917 03:39:28 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:23.917 03:39:28 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:23.917 03:39:28 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3359885' 00:36:23.917 killing process with pid 3359885 00:36:23.917 03:39:28 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3359885 00:36:23.917 03:39:28 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3359885 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:23.917 03:39:28 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:23.917 Waiting for block devices as requested 00:36:23.917 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:24.175 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:24.175 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:24.175 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:24.433 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:24.433 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:24.433 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:24.433 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:24.692 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:24.692 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:24.692 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:24.692 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:24.956 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:24.956 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:24.956 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:25.223 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:25.223 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:25.223 03:39:31 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:25.223 03:39:31 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:25.223 03:39:31 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:25.223 03:39:31 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:25.223 03:39:31 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:25.223 03:39:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:25.223 03:39:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.758 03:39:33 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:27.759 00:36:27.759 real 1m6.619s 00:36:27.759 user 6m25.573s 00:36:27.759 sys 0m20.265s 00:36:27.759 03:39:33 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:36:27.759 03:39:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:27.759 ************************************ 00:36:27.759 END TEST nvmf_dif 00:36:27.759 ************************************ 00:36:27.759 03:39:33 -- common/autotest_common.sh@1142 -- # return 0 00:36:27.759 03:39:33 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:27.759 03:39:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:27.759 03:39:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:27.759 03:39:33 -- common/autotest_common.sh@10 -- # set +x 00:36:27.759 ************************************ 00:36:27.759 START TEST nvmf_abort_qd_sizes 00:36:27.759 ************************************ 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:27.759 * Looking for test storage... 00:36:27.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.759 03:39:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:27.759 03:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:29.661 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:29.661 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:29.661 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:29.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:29.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
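The trace above shows gather_supported_nvmf_pci_devs building per-family PCI ID lists (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX IDs) and then pinning pci_devs to the E810 list for this TCP run. A minimal sketch of that matching logic, assuming a pci_bus_cache associative array keyed as "vendor:device" with space-separated BDFs, as nvmf/common.sh populates elsewhere:

    #!/usr/bin/env bash
    # Sketch of the vendor:device matching traced above (not the full helper).
    declare -A pci_bus_cache=(
        ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1"   # the E810 pair found in this log
    )
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})           # unset keys expand to nothing
    e810+=(${pci_bus_cache["$intel:0x159b"]})
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x101d"]})         # one of several ConnectX IDs checked
    pci_devs=("${e810[@]}")                             # TCP run on e810 => use the E810 list
    for pci in "${pci_devs[@]}"; do
        echo "Found $pci"
    done

The unquoted array expansions are deliberate: word splitting turns each cached "BDF BDF" string into separate array elements, which is why the single 0x8086:0x159b key yields the two Found lines seen above.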
00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:29.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:29.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:36:29.662 00:36:29.662 --- 10.0.0.2 ping statistics --- 00:36:29.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.662 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:29.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
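nvmf_tcp_init, traced above, turns the two physical E810 ports into a point-to-point topology: the target-side interface moves into a private network namespace with 10.0.0.2/24, the initiator side stays in the root namespace as 10.0.0.1/24, and an iptables rule opens TCP/4420 on the initiator interface. A condensed replay of those steps, with the interface and namespace names taken from this log (they will differ on other rigs):

    TGT_IF=cvl_0_0            # moved into the namespace; hosts the NVMe-oF target
    INI_IF=cvl_0_1            # stays in the root namespace; initiator side
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target ns -> root ns

The cross-namespace pings around this point in the log are exactly that sanity check, and everything later that must run inside the target namespace is wrapped in ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix).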
00:36:29.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:36:29.662 00:36:29.662 --- 10.0.0.1 ping statistics --- 00:36:29.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.662 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:29.662 03:39:35 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:30.638 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:30.638 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:30.638 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:30.638 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:30.638 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:30.638 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:30.638 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:30.638 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:30.638 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:30.638 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:30.638 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:30.638 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:30.638 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:30.638 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:30.638 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:30.638 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:31.573 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3371339 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3371339 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3371339 ']' 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:31.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:31.831 03:39:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:31.831 [2024-07-15 03:39:37.916885] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:36:31.831 [2024-07-15 03:39:37.916978] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.831 EAL: No free 2048 kB hugepages reported on node 1 00:36:32.090 [2024-07-15 03:39:37.987843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:32.090 [2024-07-15 03:39:38.079915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:32.090 [2024-07-15 03:39:38.079979] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:32.090 [2024-07-15 03:39:38.079996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:32.090 [2024-07-15 03:39:38.080011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:32.090 [2024-07-15 03:39:38.080023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:32.090 [2024-07-15 03:39:38.080079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:32.090 [2024-07-15 03:39:38.080133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:32.090 [2024-07-15 03:39:38.080252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:32.090 [2024-07-15 03:39:38.080255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:32.090 03:39:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:32.090 03:39:38 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:32.348 03:39:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:32.348 03:39:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:32.348 03:39:38 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:32.348 03:39:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:32.348 03:39:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:32.348 03:39:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:32.348 03:39:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:32.348 03:39:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:32.348 03:39:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:32.348 03:39:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:32.348 ************************************ 00:36:32.348 START TEST spdk_target_abort 00:36:32.348 ************************************ 00:36:32.348 03:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:32.348 03:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:32.348 03:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:32.348 03:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.348 03:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.628 spdk_targetn1 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.628 [2024-07-15 03:39:41.106022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.628 [2024-07-15 03:39:41.138281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:35.628 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:35.629 03:39:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:35.629 EAL: No free 2048 kB hugepages 
reported on node 1 00:36:38.148 Initializing NVMe Controllers 00:36:38.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:38.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:38.148 Initialization complete. Launching workers. 00:36:38.148 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11135, failed: 0 00:36:38.148 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1197, failed to submit 9938 00:36:38.148 success 751, unsuccess 446, failed 0 00:36:38.148 03:39:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:38.148 03:39:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:38.406 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.677 Initializing NVMe Controllers 00:36:41.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:41.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:41.677 Initialization complete. Launching workers. 00:36:41.677 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8552, failed: 0 00:36:41.677 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1246, failed to submit 7306 00:36:41.677 success 339, unsuccess 907, failed 0 00:36:41.677 03:39:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:41.677 03:39:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:41.677 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.953 Initializing NVMe Controllers 00:36:44.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:44.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:44.953 Initialization complete. Launching workers. 
00:36:44.953 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32051, failed: 0 00:36:44.953 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2647, failed to submit 29404 00:36:44.953 success 555, unsuccess 2092, failed 0 00:36:44.953 03:39:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:44.953 03:39:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.953 03:39:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.953 03:39:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.953 03:39:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:44.953 03:39:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.953 03:39:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3371339 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3371339 ']' 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3371339 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3371339 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3371339' 00:36:46.326 killing process with pid 3371339 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3371339 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3371339 00:36:46.326 00:36:46.326 real 0m14.106s 00:36:46.326 user 0m53.228s 00:36:46.326 sys 0m2.598s 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.326 ************************************ 00:36:46.326 END TEST spdk_target_abort 00:36:46.326 ************************************ 00:36:46.326 03:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:46.326 03:39:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:46.326 03:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:46.326 03:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:46.326 03:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:46.326 
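Stripped of the xtrace noise, the spdk_target_abort test that just completed is a short RPC sequence plus three runs of the abort example at queue depths 4, 24, and 64. A sketch of the equivalent manual invocation, assuming nvmf_tgt is already up on /var/tmp/spdk.sock and that rpc.py and the abort binary are on PATH (paths abbreviated from the log):

    # Target setup: local NVMe as the backing bdev, exported over NVMe/TCP.
    rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

    # The rabort loop: mixed I/O plus Abort commands at each queue depth.
    for qd in 4 24 64; do
        abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

In each run summary, "abort submitted N, failed to submit M" counts Abort commands issued against in-flight I/O, and success/unsuccess roughly splits completed aborts by whether the targeted command was actually cancelled; the pass condition the test cares about is "failed 0" on every line.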
************************************ 00:36:46.326 START TEST kernel_target_abort 00:36:46.326 ************************************ 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:46.326 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:46.327 03:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:47.699 Waiting for block devices as requested 00:36:47.699 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:47.699 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:47.699 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:47.699 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:47.957 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:47.957 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:47.958 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:47.958 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:48.216 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:48.216 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:48.216 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:48.216 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:48.474 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:48.474 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:48.474 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:48.732 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:48.732 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:48.732 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:48.732 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:48.732 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:48.732 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:48.732 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:48.732 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:48.732 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:48.732 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:48.732 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:48.732 No valid GPT data, bailing 00:36:48.732 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:48.990 03:39:54 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:48.990 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:48.991 00:36:48.991 Discovery Log Number of Records 2, Generation counter 2 00:36:48.991 =====Discovery Log Entry 0====== 00:36:48.991 trtype: tcp 00:36:48.991 adrfam: ipv4 00:36:48.991 subtype: current discovery subsystem 00:36:48.991 treq: not specified, sq flow control disable supported 00:36:48.991 portid: 1 00:36:48.991 trsvcid: 4420 00:36:48.991 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:48.991 traddr: 10.0.0.1 00:36:48.991 eflags: none 00:36:48.991 sectype: none 00:36:48.991 =====Discovery Log Entry 1====== 00:36:48.991 trtype: tcp 00:36:48.991 adrfam: ipv4 00:36:48.991 subtype: nvme subsystem 00:36:48.991 treq: not specified, sq flow control disable supported 00:36:48.991 portid: 1 00:36:48.991 trsvcid: 4420 00:36:48.991 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:48.991 traddr: 10.0.0.1 00:36:48.991 eflags: none 00:36:48.991 sectype: none 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.991 03:39:54 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:48.991 03:39:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:48.991 EAL: No free 2048 kB hugepages reported on node 1 00:36:52.268 Initializing NVMe Controllers 00:36:52.268 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:52.268 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:52.268 Initialization complete. Launching workers. 00:36:52.268 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36375, failed: 0 00:36:52.268 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36375, failed to submit 0 00:36:52.268 success 0, unsuccess 36375, failed 0 00:36:52.268 03:39:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:52.268 03:39:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.268 EAL: No free 2048 kB hugepages reported on node 1 00:36:55.548 Initializing NVMe Controllers 00:36:55.548 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:55.548 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:55.548 Initialization complete. Launching workers. 
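The kernel_target_abort runs now in flight exercise the in-kernel nvmet target rather than SPDK: configure_kernel_target, traced above, builds the whole target out of configfs entries, backs namespace 1 with the local /dev/nvme0n1, and exposes it on 10.0.0.1:4420. The trace only shows the echo commands, not their redirection targets, so the attribute file names below are filled in from the standard nvmet configfs layout:

    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet                                   # nvmet_tcp loads on demand for trtype=tcp
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo "SPDK-$nqn"  > "$sub/attr_model"            # the 'echo SPDK-nqn...' seen above
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                 # wiring the port to the subsystem

After this, nvme discover sees the two log entries printed above: the well-known discovery subsystem and nqn.2016-06.io.spdk:testnqn itself, both on 10.0.0.1:4420.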
00:36:55.548 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71486, failed: 0 00:36:55.548 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18014, failed to submit 53472 00:36:55.548 success 0, unsuccess 18014, failed 0 00:36:55.548 03:40:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:55.548 03:40:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:55.548 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.860 Initializing NVMe Controllers 00:36:58.860 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:58.860 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:58.860 Initialization complete. Launching workers. 00:36:58.860 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71910, failed: 0 00:36:58.860 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17954, failed to submit 53956 00:36:58.860 success 0, unsuccess 17954, failed 0 00:36:58.860 03:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:58.860 03:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:58.860 03:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:58.860 03:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:58.860 03:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:58.860 03:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:58.860 03:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:58.861 03:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:58.861 03:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:58.861 03:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:59.426 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:59.427 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:59.427 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:59.427 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:59.427 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:59.686 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:59.686 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:59.686 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:59.686 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:59.686 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:59.686 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:59.686 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:59.686 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:59.686 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:59.686 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:59.686 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:00.623 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:37:00.623 00:37:00.623 real 0m14.311s 00:37:00.623 user 0m5.556s 00:37:00.623 sys 0m3.399s 00:37:00.623 03:40:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:00.623 03:40:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:00.623 ************************************ 00:37:00.623 END TEST kernel_target_abort 00:37:00.623 ************************************ 00:37:00.623 03:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:37:00.623 03:40:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:00.623 03:40:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:00.624 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:00.624 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:00.624 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:00.624 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:00.624 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:00.624 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:00.624 rmmod nvme_tcp 00:37:00.882 rmmod nvme_fabrics 00:37:00.882 rmmod nvme_keyring 00:37:00.882 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:00.882 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:00.882 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:00.882 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3371339 ']' 00:37:00.882 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3371339 00:37:00.882 03:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3371339 ']' 00:37:00.882 03:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3371339 00:37:00.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3371339) - No such process 00:37:00.882 03:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3371339 is not found' 00:37:00.882 Process with pid 3371339 is not found 00:37:00.882 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:00.882 03:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:01.815 Waiting for block devices as requested 00:37:01.815 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:01.815 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:02.074 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:02.074 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:02.074 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:02.333 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:02.333 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:02.333 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:02.333 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:02.592 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:02.592 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:02.592 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:02.592 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:02.850 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:37:02.850 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:02.850 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:03.109 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:03.109 03:40:09 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:03.109 03:40:09 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:03.109 03:40:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:03.109 03:40:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:03.109 03:40:09 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.109 03:40:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:03.109 03:40:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:05.638 03:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:05.638 00:37:05.638 real 0m37.789s 00:37:05.638 user 1m0.826s 00:37:05.638 sys 0m9.301s 00:37:05.638 03:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:05.638 03:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:05.638 ************************************ 00:37:05.638 END TEST nvmf_abort_qd_sizes 00:37:05.638 ************************************ 00:37:05.638 03:40:11 -- common/autotest_common.sh@1142 -- # return 0 00:37:05.638 03:40:11 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:05.638 03:40:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:05.638 03:40:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:05.638 03:40:11 -- common/autotest_common.sh@10 -- # set +x 00:37:05.638 ************************************ 00:37:05.638 START TEST keyring_file 00:37:05.638 ************************************ 00:37:05.638 03:40:11 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:05.638 * Looking for test storage... 
00:37:05.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:05.638 03:40:11 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:05.638 03:40:11 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:05.638 03:40:11 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:05.638 03:40:11 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:05.638 03:40:11 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:05.638 03:40:11 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.638 03:40:11 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.638 03:40:11 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.638 03:40:11 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:05.638 03:40:11 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:05.638 03:40:11 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qTxdDbvDic 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:05.639 03:40:11 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qTxdDbvDic 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qTxdDbvDic 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.qTxdDbvDic 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IQGqMC0lLt 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:05.639 03:40:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IQGqMC0lLt 00:37:05.639 03:40:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IQGqMC0lLt 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.IQGqMC0lLt 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=3377103 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:05.639 03:40:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3377103 00:37:05.639 03:40:11 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3377103 ']' 00:37:05.639 03:40:11 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:05.639 03:40:11 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:05.639 03:40:11 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:05.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:05.639 03:40:11 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:05.639 03:40:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:05.639 [2024-07-15 03:40:11.444965] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
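prep_key, traced above, converts each raw hex key into the NVMe TLS PSK interchange format via an inline Python snippet, writes it to a mktemp file, and locks the file to mode 0600. A sketch of what that conversion amounts to, assuming (as in SPDK's format_key helper) a base64 payload of the key bytes followed by their CRC32 in little-endian, with digest 0 meaning no hash:

    format_interchange_psk() {
        local key=$1 digest=$2
        python3 - "$key" "$digest" <<'EOF'
    import base64, binascii, struct, sys
    key = bytes.fromhex(sys.argv[1])
    crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff)
    print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
    EOF
    }

    path=$(mktemp)                          # e.g. /tmp/tmp.qTxdDbvDic above
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"                      # keyring keys must not be world-readable

key0 (00112233445566778899aabbccddeeff) and key1 (112233445566778899aabbccddeeff00) each get their own temp file this way before the target is started.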
00:37:05.639 [2024-07-15 03:40:11.445046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377103 ] 00:37:05.639 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.639 [2024-07-15 03:40:11.503609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.639 [2024-07-15 03:40:11.587780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:05.898 03:40:11 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:05.898 [2024-07-15 03:40:11.842776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:05.898 null0 00:37:05.898 [2024-07-15 03:40:11.874823] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:05.898 [2024-07-15 03:40:11.875347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:05.898 [2024-07-15 03:40:11.882841] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.898 03:40:11 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:05.898 [2024-07-15 03:40:11.894874] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:05.898 request: 00:37:05.898 { 00:37:05.898 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:05.898 "secure_channel": false, 00:37:05.898 "listen_address": { 00:37:05.898 "trtype": "tcp", 00:37:05.898 "traddr": "127.0.0.1", 00:37:05.898 "trsvcid": "4420" 00:37:05.898 }, 00:37:05.898 "method": "nvmf_subsystem_add_listener", 00:37:05.898 "req_id": 1 00:37:05.898 } 00:37:05.898 Got JSON-RPC error response 00:37:05.898 response: 00:37:05.898 { 00:37:05.898 "code": -32602, 00:37:05.898 "message": "Invalid parameters" 00:37:05.898 } 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 
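The first functional check is a negative one: the listener on 127.0.0.1:4420 was already registered during target setup, so re-adding it must fail with -32602 "Invalid parameters" ("Listener already exists"). Stripped of the NOT/valid_exec_arg wrappers, the check reduces to something like the following; rpc.py is scripts/rpc.py from the SPDK tree, talking to the default target socket, and the subsystem itself was created by the earlier, partially elided rpc_cmd batch.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# the listener already exists, so a second add must be rejected
if $rpc nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0; then
    echo "duplicate listener was accepted, test should fail" >&2
    exit 1
fi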
00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:05.898 03:40:11 keyring_file -- keyring/file.sh@46 -- # bperfpid=3377107 00:37:05.898 03:40:11 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3377107 /var/tmp/bperf.sock 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3377107 ']' 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:05.898 03:40:11 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:05.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:05.898 03:40:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:05.898 [2024-07-15 03:40:11.944461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:37:05.898 [2024-07-15 03:40:11.944529] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377107 ] 00:37:05.898 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.898 [2024-07-15 03:40:12.002383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.156 [2024-07-15 03:40:12.088564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.156 03:40:12 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:06.156 03:40:12 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:06.156 03:40:12 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qTxdDbvDic 00:37:06.156 03:40:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qTxdDbvDic 00:37:06.413 03:40:12 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IQGqMC0lLt 00:37:06.414 03:40:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IQGqMC0lLt 00:37:06.671 03:40:12 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:06.671 03:40:12 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:06.671 03:40:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.671 03:40:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:06.671 03:40:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.929 03:40:12 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.qTxdDbvDic == \/\t\m\p\/\t\m\p\.\q\T\x\d\D\b\v\D\i\c ]] 00:37:06.929 03:40:12 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:37:06.929 03:40:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:06.929 03:40:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.929 03:40:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:06.929 03:40:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.187 03:40:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.IQGqMC0lLt == \/\t\m\p\/\t\m\p\.\I\Q\G\q\M\C\0\l\L\t ]] 00:37:07.187 03:40:13 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:07.187 03:40:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:07.187 03:40:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:07.187 03:40:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.187 03:40:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:07.187 03:40:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.444 03:40:13 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:07.444 03:40:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:07.444 03:40:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:07.444 03:40:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:07.444 03:40:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.445 03:40:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.445 03:40:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:07.701 03:40:13 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:07.701 03:40:13 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:07.701 03:40:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:07.958 [2024-07-15 03:40:13.935449] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:07.958 nvme0n1 00:37:07.958 03:40:14 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:07.958 03:40:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:07.958 03:40:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:07.958 03:40:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.958 03:40:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.958 03:40:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:08.215 03:40:14 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:08.215 03:40:14 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:08.215 03:40:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:08.215 03:40:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:08.215 03:40:14 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.215 03:40:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.215 03:40:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:08.473 03:40:14 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:08.473 03:40:14 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:08.473 Running I/O for 1 seconds... 00:37:09.845 00:37:09.845 Latency(us) 00:37:09.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:09.845 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:09.845 nvme0n1 : 1.01 6527.78 25.50 0.00 0.00 19507.82 4951.61 28932.93 00:37:09.845 =================================================================================================================== 00:37:09.845 Total : 6527.78 25.50 0.00 0.00 19507.82 4951.61 28932.93 00:37:09.845 0 00:37:09.845 03:40:15 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:09.845 03:40:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:09.845 03:40:15 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:09.845 03:40:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:09.845 03:40:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:09.845 03:40:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.845 03:40:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.845 03:40:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:10.103 03:40:16 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:10.103 03:40:16 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:10.103 03:40:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:10.103 03:40:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.103 03:40:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.103 03:40:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.103 03:40:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:10.361 03:40:16 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:10.361 03:40:16 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:10.361 03:40:16 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:10.361 03:40:16 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:10.361 03:40:16 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:10.361 03:40:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:10.361 03:40:16 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:10.361 03:40:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:10.361 03:40:16 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:10.361 03:40:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:10.619 [2024-07-15 03:40:16.640103] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:10.619 [2024-07-15 03:40:16.640699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131b8f0 (107): Transport endpoint is not connected 00:37:10.619 [2024-07-15 03:40:16.641689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131b8f0 (9): Bad file descriptor 00:37:10.619 [2024-07-15 03:40:16.642688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:10.619 [2024-07-15 03:40:16.642711] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:10.619 [2024-07-15 03:40:16.642728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:10.619 request: 00:37:10.619 { 00:37:10.619 "name": "nvme0", 00:37:10.619 "trtype": "tcp", 00:37:10.619 "traddr": "127.0.0.1", 00:37:10.619 "adrfam": "ipv4", 00:37:10.619 "trsvcid": "4420", 00:37:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:10.619 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:10.619 "prchk_reftag": false, 00:37:10.619 "prchk_guard": false, 00:37:10.619 "hdgst": false, 00:37:10.619 "ddgst": false, 00:37:10.619 "psk": "key1", 00:37:10.619 "method": "bdev_nvme_attach_controller", 00:37:10.619 "req_id": 1 00:37:10.619 } 00:37:10.619 Got JSON-RPC error response 00:37:10.619 response: 00:37:10.619 { 00:37:10.619 "code": -5, 00:37:10.619 "message": "Input/output error" 00:37:10.619 } 00:37:10.619 03:40:16 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:10.619 03:40:16 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:10.619 03:40:16 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:10.619 03:40:16 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:10.619 03:40:16 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:10.619 03:40:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:10.619 03:40:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.619 03:40:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.619 03:40:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:10.619 03:40:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.878 03:40:16 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:10.878 03:40:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:10.878 03:40:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:10.878 03:40:16 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.878 03:40:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.878 03:40:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.878 03:40:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:11.136 03:40:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:11.136 03:40:17 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:11.136 03:40:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:11.394 03:40:17 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:11.394 03:40:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:11.652 03:40:17 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:11.652 03:40:17 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:11.652 03:40:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.910 03:40:17 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:11.910 03:40:17 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.qTxdDbvDic 00:37:11.910 03:40:17 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.qTxdDbvDic 00:37:11.910 03:40:17 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:11.910 03:40:17 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.qTxdDbvDic 00:37:11.910 03:40:17 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:11.910 03:40:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.910 03:40:17 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:11.910 03:40:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.910 03:40:17 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qTxdDbvDic 00:37:11.910 03:40:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qTxdDbvDic 00:37:12.168 [2024-07-15 03:40:18.158599] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qTxdDbvDic': 0100660 00:37:12.168 [2024-07-15 03:40:18.158642] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:12.168 request: 00:37:12.168 { 00:37:12.168 "name": "key0", 00:37:12.168 "path": "/tmp/tmp.qTxdDbvDic", 00:37:12.168 "method": "keyring_file_add_key", 00:37:12.168 "req_id": 1 00:37:12.168 } 00:37:12.168 Got JSON-RPC error response 00:37:12.168 response: 00:37:12.168 { 00:37:12.168 "code": -1, 00:37:12.168 "message": "Operation not permitted" 00:37:12.168 } 00:37:12.168 03:40:18 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:12.168 03:40:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:12.168 03:40:18 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:12.168 03:40:18 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:12.168 03:40:18 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.qTxdDbvDic 00:37:12.168 03:40:18 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qTxdDbvDic 00:37:12.168 03:40:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qTxdDbvDic 00:37:12.427 03:40:18 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.qTxdDbvDic 00:37:12.427 03:40:18 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:12.427 03:40:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:12.427 03:40:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.427 03:40:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.427 03:40:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.427 03:40:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.685 03:40:18 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:12.685 03:40:18 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.685 03:40:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:12.685 03:40:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.685 03:40:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:12.685 03:40:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:12.685 03:40:18 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:12.685 03:40:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:12.685 03:40:18 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.685 03:40:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.943 [2024-07-15 03:40:18.900656] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.qTxdDbvDic': No such file or directory 00:37:12.943 [2024-07-15 03:40:18.900709] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:12.943 [2024-07-15 03:40:18.900741] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:12.943 [2024-07-15 03:40:18.900754] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:12.943 [2024-07-15 03:40:18.900767] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:12.943 request: 00:37:12.943 { 00:37:12.943 "name": "nvme0", 00:37:12.943 "trtype": "tcp", 00:37:12.943 "traddr": "127.0.0.1", 00:37:12.943 "adrfam": "ipv4", 00:37:12.943 
"trsvcid": "4420", 00:37:12.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:12.943 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:12.943 "prchk_reftag": false, 00:37:12.943 "prchk_guard": false, 00:37:12.943 "hdgst": false, 00:37:12.943 "ddgst": false, 00:37:12.943 "psk": "key0", 00:37:12.943 "method": "bdev_nvme_attach_controller", 00:37:12.943 "req_id": 1 00:37:12.943 } 00:37:12.943 Got JSON-RPC error response 00:37:12.943 response: 00:37:12.943 { 00:37:12.943 "code": -19, 00:37:12.943 "message": "No such device" 00:37:12.943 } 00:37:12.943 03:40:18 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:12.943 03:40:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:12.943 03:40:18 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:12.943 03:40:18 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:12.943 03:40:18 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:12.943 03:40:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:13.202 03:40:19 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:13.202 03:40:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:13.202 03:40:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:13.202 03:40:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:13.202 03:40:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:13.202 03:40:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:13.202 03:40:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.j5n7jk3ieV 00:37:13.202 03:40:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:13.202 03:40:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:13.202 03:40:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:13.202 03:40:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:13.202 03:40:19 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:13.202 03:40:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:13.202 03:40:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:13.202 03:40:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.j5n7jk3ieV 00:37:13.202 03:40:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.j5n7jk3ieV 00:37:13.202 03:40:19 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.j5n7jk3ieV 00:37:13.202 03:40:19 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j5n7jk3ieV 00:37:13.202 03:40:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j5n7jk3ieV 00:37:13.460 03:40:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:13.460 03:40:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:13.719 nvme0n1 00:37:13.719 
03:40:19 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:13.719 03:40:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.719 03:40:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.719 03:40:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.719 03:40:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.719 03:40:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.977 03:40:20 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:13.977 03:40:20 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:13.977 03:40:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:14.235 03:40:20 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:14.235 03:40:20 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:14.235 03:40:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:14.235 03:40:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.235 03:40:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:14.492 03:40:20 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:14.492 03:40:20 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:14.492 03:40:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:14.492 03:40:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:14.492 03:40:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:14.492 03:40:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.492 03:40:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:14.749 03:40:20 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:14.749 03:40:20 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:14.749 03:40:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:15.033 03:40:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:15.033 03:40:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.033 03:40:21 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:15.292 03:40:21 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:15.292 03:40:21 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j5n7jk3ieV 00:37:15.292 03:40:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j5n7jk3ieV 00:37:15.550 03:40:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IQGqMC0lLt 00:37:15.550 03:40:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IQGqMC0lLt 00:37:15.808 03:40:21 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:15.808 03:40:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:16.065 nvme0n1 00:37:16.065 03:40:22 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:16.065 03:40:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:16.323 03:40:22 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:16.323 "subsystems": [ 00:37:16.323 { 00:37:16.323 "subsystem": "keyring", 00:37:16.323 "config": [ 00:37:16.323 { 00:37:16.323 "method": "keyring_file_add_key", 00:37:16.323 "params": { 00:37:16.323 "name": "key0", 00:37:16.323 "path": "/tmp/tmp.j5n7jk3ieV" 00:37:16.323 } 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "method": "keyring_file_add_key", 00:37:16.323 "params": { 00:37:16.323 "name": "key1", 00:37:16.323 "path": "/tmp/tmp.IQGqMC0lLt" 00:37:16.323 } 00:37:16.323 } 00:37:16.323 ] 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "subsystem": "iobuf", 00:37:16.323 "config": [ 00:37:16.323 { 00:37:16.323 "method": "iobuf_set_options", 00:37:16.323 "params": { 00:37:16.323 "small_pool_count": 8192, 00:37:16.323 "large_pool_count": 1024, 00:37:16.323 "small_bufsize": 8192, 00:37:16.323 "large_bufsize": 135168 00:37:16.323 } 00:37:16.323 } 00:37:16.323 ] 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "subsystem": "sock", 00:37:16.323 "config": [ 00:37:16.323 { 00:37:16.323 "method": "sock_set_default_impl", 00:37:16.323 "params": { 00:37:16.323 "impl_name": "posix" 00:37:16.323 } 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "method": "sock_impl_set_options", 00:37:16.323 "params": { 00:37:16.323 "impl_name": "ssl", 00:37:16.323 "recv_buf_size": 4096, 00:37:16.323 "send_buf_size": 4096, 00:37:16.323 "enable_recv_pipe": true, 00:37:16.323 "enable_quickack": false, 00:37:16.323 "enable_placement_id": 0, 00:37:16.323 "enable_zerocopy_send_server": true, 00:37:16.323 "enable_zerocopy_send_client": false, 00:37:16.323 "zerocopy_threshold": 0, 00:37:16.323 "tls_version": 0, 00:37:16.323 "enable_ktls": false 00:37:16.323 } 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "method": "sock_impl_set_options", 00:37:16.323 "params": { 00:37:16.323 "impl_name": "posix", 00:37:16.323 "recv_buf_size": 2097152, 00:37:16.323 "send_buf_size": 2097152, 00:37:16.323 "enable_recv_pipe": true, 00:37:16.323 "enable_quickack": false, 00:37:16.323 "enable_placement_id": 0, 00:37:16.323 "enable_zerocopy_send_server": true, 00:37:16.323 "enable_zerocopy_send_client": false, 00:37:16.323 "zerocopy_threshold": 0, 00:37:16.323 "tls_version": 0, 00:37:16.323 "enable_ktls": false 00:37:16.323 } 00:37:16.323 } 00:37:16.323 ] 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "subsystem": "vmd", 00:37:16.323 "config": [] 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "subsystem": "accel", 00:37:16.323 "config": [ 00:37:16.323 { 00:37:16.323 "method": "accel_set_options", 00:37:16.323 "params": { 00:37:16.323 "small_cache_size": 128, 00:37:16.323 "large_cache_size": 16, 00:37:16.323 "task_count": 2048, 00:37:16.323 "sequence_count": 2048, 00:37:16.323 "buf_count": 2048 00:37:16.323 } 00:37:16.323 } 00:37:16.323 ] 00:37:16.323 
}, 00:37:16.323 { 00:37:16.323 "subsystem": "bdev", 00:37:16.323 "config": [ 00:37:16.323 { 00:37:16.323 "method": "bdev_set_options", 00:37:16.323 "params": { 00:37:16.323 "bdev_io_pool_size": 65535, 00:37:16.323 "bdev_io_cache_size": 256, 00:37:16.323 "bdev_auto_examine": true, 00:37:16.323 "iobuf_small_cache_size": 128, 00:37:16.323 "iobuf_large_cache_size": 16 00:37:16.323 } 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "method": "bdev_raid_set_options", 00:37:16.323 "params": { 00:37:16.323 "process_window_size_kb": 1024 00:37:16.323 } 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "method": "bdev_iscsi_set_options", 00:37:16.323 "params": { 00:37:16.323 "timeout_sec": 30 00:37:16.323 } 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "method": "bdev_nvme_set_options", 00:37:16.323 "params": { 00:37:16.323 "action_on_timeout": "none", 00:37:16.323 "timeout_us": 0, 00:37:16.323 "timeout_admin_us": 0, 00:37:16.323 "keep_alive_timeout_ms": 10000, 00:37:16.323 "arbitration_burst": 0, 00:37:16.323 "low_priority_weight": 0, 00:37:16.323 "medium_priority_weight": 0, 00:37:16.323 "high_priority_weight": 0, 00:37:16.323 "nvme_adminq_poll_period_us": 10000, 00:37:16.323 "nvme_ioq_poll_period_us": 0, 00:37:16.323 "io_queue_requests": 512, 00:37:16.323 "delay_cmd_submit": true, 00:37:16.323 "transport_retry_count": 4, 00:37:16.323 "bdev_retry_count": 3, 00:37:16.323 "transport_ack_timeout": 0, 00:37:16.323 "ctrlr_loss_timeout_sec": 0, 00:37:16.323 "reconnect_delay_sec": 0, 00:37:16.323 "fast_io_fail_timeout_sec": 0, 00:37:16.323 "disable_auto_failback": false, 00:37:16.323 "generate_uuids": false, 00:37:16.323 "transport_tos": 0, 00:37:16.323 "nvme_error_stat": false, 00:37:16.323 "rdma_srq_size": 0, 00:37:16.323 "io_path_stat": false, 00:37:16.323 "allow_accel_sequence": false, 00:37:16.323 "rdma_max_cq_size": 0, 00:37:16.323 "rdma_cm_event_timeout_ms": 0, 00:37:16.323 "dhchap_digests": [ 00:37:16.323 "sha256", 00:37:16.323 "sha384", 00:37:16.323 "sha512" 00:37:16.323 ], 00:37:16.323 "dhchap_dhgroups": [ 00:37:16.323 "null", 00:37:16.323 "ffdhe2048", 00:37:16.323 "ffdhe3072", 00:37:16.323 "ffdhe4096", 00:37:16.323 "ffdhe6144", 00:37:16.323 "ffdhe8192" 00:37:16.323 ] 00:37:16.323 } 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "method": "bdev_nvme_attach_controller", 00:37:16.323 "params": { 00:37:16.323 "name": "nvme0", 00:37:16.323 "trtype": "TCP", 00:37:16.323 "adrfam": "IPv4", 00:37:16.323 "traddr": "127.0.0.1", 00:37:16.323 "trsvcid": "4420", 00:37:16.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.323 "prchk_reftag": false, 00:37:16.323 "prchk_guard": false, 00:37:16.323 "ctrlr_loss_timeout_sec": 0, 00:37:16.323 "reconnect_delay_sec": 0, 00:37:16.323 "fast_io_fail_timeout_sec": 0, 00:37:16.323 "psk": "key0", 00:37:16.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:16.323 "hdgst": false, 00:37:16.323 "ddgst": false 00:37:16.323 } 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "method": "bdev_nvme_set_hotplug", 00:37:16.323 "params": { 00:37:16.323 "period_us": 100000, 00:37:16.323 "enable": false 00:37:16.323 } 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "method": "bdev_wait_for_examine" 00:37:16.323 } 00:37:16.323 ] 00:37:16.323 }, 00:37:16.323 { 00:37:16.323 "subsystem": "nbd", 00:37:16.323 "config": [] 00:37:16.323 } 00:37:16.323 ] 00:37:16.323 }' 00:37:16.323 03:40:22 keyring_file -- keyring/file.sh@114 -- # killprocess 3377107 00:37:16.323 03:40:22 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3377107 ']' 00:37:16.323 03:40:22 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3377107 00:37:16.323 03:40:22 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:16.323 03:40:22 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:16.323 03:40:22 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3377107 00:37:16.323 03:40:22 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:16.323 03:40:22 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:16.323 03:40:22 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3377107' 00:37:16.323 killing process with pid 3377107 00:37:16.323 03:40:22 keyring_file -- common/autotest_common.sh@967 -- # kill 3377107 00:37:16.323 Received shutdown signal, test time was about 1.000000 seconds 00:37:16.323 00:37:16.323 Latency(us) 00:37:16.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:16.323 =================================================================================================================== 00:37:16.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:16.324 03:40:22 keyring_file -- common/autotest_common.sh@972 -- # wait 3377107 00:37:16.581 03:40:22 keyring_file -- keyring/file.sh@117 -- # bperfpid=3378560 00:37:16.581 03:40:22 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3378560 /var/tmp/bperf.sock 00:37:16.581 03:40:22 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3378560 ']' 00:37:16.581 03:40:22 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.581 03:40:22 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:16.581 03:40:22 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:16.581 03:40:22 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:16.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
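The restart path is the point of the whole second half: the configuration that save_config captured above, keyring file paths included, is fed into a brand-new bdevperf through process substitution, which the kernel exposes as /dev/fd/63 in the -c argument. A reduced sketch of that hand-off:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# capture the live JSON config from the old instance...
config=$($rpc -s /var/tmp/bperf.sock save_config)

# ...and boot a fresh bdevperf directly from it; <(...) shows up as /dev/fd/63
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &
bperfpid=$!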
00:37:16.582 03:40:22 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:16.582 03:40:22 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:16.582 "subsystems": [ 00:37:16.582 { 00:37:16.582 "subsystem": "keyring", 00:37:16.582 "config": [ 00:37:16.582 { 00:37:16.582 "method": "keyring_file_add_key", 00:37:16.582 "params": { 00:37:16.582 "name": "key0", 00:37:16.582 "path": "/tmp/tmp.j5n7jk3ieV" 00:37:16.582 } 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "method": "keyring_file_add_key", 00:37:16.582 "params": { 00:37:16.582 "name": "key1", 00:37:16.582 "path": "/tmp/tmp.IQGqMC0lLt" 00:37:16.582 } 00:37:16.582 } 00:37:16.582 ] 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "subsystem": "iobuf", 00:37:16.582 "config": [ 00:37:16.582 { 00:37:16.582 "method": "iobuf_set_options", 00:37:16.582 "params": { 00:37:16.582 "small_pool_count": 8192, 00:37:16.582 "large_pool_count": 1024, 00:37:16.582 "small_bufsize": 8192, 00:37:16.582 "large_bufsize": 135168 00:37:16.582 } 00:37:16.582 } 00:37:16.582 ] 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "subsystem": "sock", 00:37:16.582 "config": [ 00:37:16.582 { 00:37:16.582 "method": "sock_set_default_impl", 00:37:16.582 "params": { 00:37:16.582 "impl_name": "posix" 00:37:16.582 } 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "method": "sock_impl_set_options", 00:37:16.582 "params": { 00:37:16.582 "impl_name": "ssl", 00:37:16.582 "recv_buf_size": 4096, 00:37:16.582 "send_buf_size": 4096, 00:37:16.582 "enable_recv_pipe": true, 00:37:16.582 "enable_quickack": false, 00:37:16.582 "enable_placement_id": 0, 00:37:16.582 "enable_zerocopy_send_server": true, 00:37:16.582 "enable_zerocopy_send_client": false, 00:37:16.582 "zerocopy_threshold": 0, 00:37:16.582 "tls_version": 0, 00:37:16.582 "enable_ktls": false 00:37:16.582 } 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "method": "sock_impl_set_options", 00:37:16.582 "params": { 00:37:16.582 "impl_name": "posix", 00:37:16.582 "recv_buf_size": 2097152, 00:37:16.582 "send_buf_size": 2097152, 00:37:16.582 "enable_recv_pipe": true, 00:37:16.582 "enable_quickack": false, 00:37:16.582 "enable_placement_id": 0, 00:37:16.582 "enable_zerocopy_send_server": true, 00:37:16.582 "enable_zerocopy_send_client": false, 00:37:16.582 "zerocopy_threshold": 0, 00:37:16.582 "tls_version": 0, 00:37:16.582 "enable_ktls": false 00:37:16.582 } 00:37:16.582 } 00:37:16.582 ] 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "subsystem": "vmd", 00:37:16.582 "config": [] 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "subsystem": "accel", 00:37:16.582 "config": [ 00:37:16.582 { 00:37:16.582 "method": "accel_set_options", 00:37:16.582 "params": { 00:37:16.582 "small_cache_size": 128, 00:37:16.582 "large_cache_size": 16, 00:37:16.582 "task_count": 2048, 00:37:16.582 "sequence_count": 2048, 00:37:16.582 "buf_count": 2048 00:37:16.582 } 00:37:16.582 } 00:37:16.582 ] 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "subsystem": "bdev", 00:37:16.582 "config": [ 00:37:16.582 { 00:37:16.582 "method": "bdev_set_options", 00:37:16.582 "params": { 00:37:16.582 "bdev_io_pool_size": 65535, 00:37:16.582 "bdev_io_cache_size": 256, 00:37:16.582 "bdev_auto_examine": true, 00:37:16.582 "iobuf_small_cache_size": 128, 00:37:16.582 "iobuf_large_cache_size": 16 00:37:16.582 } 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "method": "bdev_raid_set_options", 00:37:16.582 "params": { 00:37:16.582 "process_window_size_kb": 1024 00:37:16.582 } 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "method": "bdev_iscsi_set_options", 00:37:16.582 "params": { 00:37:16.582 
"timeout_sec": 30 00:37:16.582 } 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "method": "bdev_nvme_set_options", 00:37:16.582 "params": { 00:37:16.582 "action_on_timeout": "none", 00:37:16.582 "timeout_us": 0, 00:37:16.582 "timeout_admin_us": 0, 00:37:16.582 "keep_alive_timeout_ms": 10000, 00:37:16.582 "arbitration_burst": 0, 00:37:16.582 "low_priority_weight": 0, 00:37:16.582 "medium_priority_weight": 0, 00:37:16.582 "high_priority_weight": 0, 00:37:16.582 "nvme_adminq_poll_period_us": 10000, 00:37:16.582 "nvme_ioq_poll_period_us": 0, 00:37:16.582 "io_queue_requests": 512, 00:37:16.582 "delay_cmd_submit": true, 00:37:16.582 "transport_retry_count": 4, 00:37:16.582 "bdev_retry_count": 3, 00:37:16.582 "transport_ack_timeout": 0, 00:37:16.582 "ctrlr_loss_timeout_sec": 0, 00:37:16.582 "reconnect_delay_sec": 0, 00:37:16.582 "fast_io_fail_timeout_sec": 0, 00:37:16.582 "disable_auto_failback": false, 00:37:16.582 "generate_uuids": false, 00:37:16.582 "transport_tos": 0, 00:37:16.582 "nvme_error_stat": false, 00:37:16.582 "rdma_srq_size": 0, 00:37:16.582 "io_path_stat": false, 00:37:16.582 "allow_accel_sequence": false, 00:37:16.582 "rdma_max_cq_size": 0, 00:37:16.582 "rdma_cm_event_timeout_ms": 0, 00:37:16.582 "dhchap_digests": [ 00:37:16.582 "sha256", 00:37:16.582 "sha384", 00:37:16.582 "sha512" 00:37:16.582 ], 00:37:16.582 "dhchap_dhgroups": [ 00:37:16.582 "null", 00:37:16.582 "ffdhe2048", 00:37:16.582 "ffdhe3072", 00:37:16.582 "ffdhe4096", 00:37:16.582 "ffdhe6144", 00:37:16.582 "ffdhe8192" 00:37:16.582 ] 00:37:16.582 } 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "method": "bdev_nvme_attach_controller", 00:37:16.582 "params": { 00:37:16.582 "name": "nvme0", 00:37:16.582 "trtype": "TCP", 00:37:16.582 "adrfam": "IPv4", 00:37:16.582 "traddr": "127.0.0.1", 00:37:16.582 "trsvcid": "4420", 00:37:16.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.582 "prchk_reftag": false, 00:37:16.582 "prchk_guard": false, 00:37:16.582 "ctrlr_loss_timeout_sec": 0, 00:37:16.582 "reconnect_delay_sec": 0, 00:37:16.582 "fast_io_fail_timeout_sec": 0, 00:37:16.582 "psk": "key0", 00:37:16.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:16.582 "hdgst": false, 00:37:16.582 "ddgst": false 00:37:16.582 } 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "method": "bdev_nvme_set_hotplug", 00:37:16.582 "params": { 00:37:16.582 "period_us": 100000, 00:37:16.582 "enable": false 00:37:16.582 } 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "method": "bdev_wait_for_examine" 00:37:16.582 } 00:37:16.582 ] 00:37:16.582 }, 00:37:16.582 { 00:37:16.582 "subsystem": "nbd", 00:37:16.582 "config": [] 00:37:16.582 } 00:37:16.582 ] 00:37:16.582 }' 00:37:16.582 03:40:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.582 [2024-07-15 03:40:22.706998] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:37:16.582 [2024-07-15 03:40:22.707075] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378560 ] 00:37:16.840 EAL: No free 2048 kB hugepages reported on node 1 00:37:16.840 [2024-07-15 03:40:22.767262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.840 [2024-07-15 03:40:22.857545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.099 [2024-07-15 03:40:23.038984] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:17.662 03:40:23 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:17.662 03:40:23 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:17.662 03:40:23 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:17.662 03:40:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.662 03:40:23 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:17.919 03:40:23 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:17.919 03:40:23 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:17.919 03:40:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:17.919 03:40:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:17.919 03:40:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.919 03:40:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.919 03:40:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:18.176 03:40:24 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:18.176 03:40:24 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:18.176 03:40:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:18.176 03:40:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.176 03:40:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.176 03:40:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:18.176 03:40:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.433 03:40:24 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:18.433 03:40:24 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:18.433 03:40:24 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:18.434 03:40:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:18.692 03:40:24 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:18.692 03:40:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:18.692 03:40:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.j5n7jk3ieV /tmp/tmp.IQGqMC0lLt 00:37:18.692 03:40:24 keyring_file -- keyring/file.sh@20 -- # killprocess 3378560 00:37:18.692 03:40:24 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3378560 ']' 00:37:18.692 03:40:24 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3378560 00:37:18.692 03:40:24 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:37:18.692 03:40:24 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:18.692 03:40:24 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3378560 00:37:18.692 03:40:24 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:18.692 03:40:24 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:18.692 03:40:24 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3378560' 00:37:18.692 killing process with pid 3378560 00:37:18.692 03:40:24 keyring_file -- common/autotest_common.sh@967 -- # kill 3378560 00:37:18.692 Received shutdown signal, test time was about 1.000000 seconds 00:37:18.692 00:37:18.692 Latency(us) 00:37:18.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:18.692 =================================================================================================================== 00:37:18.692 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:18.692 03:40:24 keyring_file -- common/autotest_common.sh@972 -- # wait 3378560 00:37:18.949 03:40:24 keyring_file -- keyring/file.sh@21 -- # killprocess 3377103 00:37:18.949 03:40:24 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3377103 ']' 00:37:18.949 03:40:24 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3377103 00:37:18.949 03:40:24 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:18.949 03:40:24 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:18.949 03:40:24 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3377103 00:37:18.949 03:40:24 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:18.949 03:40:24 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:18.949 03:40:24 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3377103' 00:37:18.949 killing process with pid 3377103 00:37:18.949 03:40:24 keyring_file -- common/autotest_common.sh@967 -- # kill 3377103 00:37:18.949 [2024-07-15 03:40:24.943962] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:18.949 03:40:24 keyring_file -- common/autotest_common.sh@972 -- # wait 3377103 00:37:19.207 00:37:19.207 real 0m14.092s 00:37:19.207 user 0m35.024s 00:37:19.207 sys 0m3.318s 00:37:19.207 03:40:25 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:19.207 03:40:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:19.207 ************************************ 00:37:19.207 END TEST keyring_file 00:37:19.207 ************************************ 00:37:19.207 03:40:25 -- common/autotest_common.sh@1142 -- # return 0 00:37:19.207 03:40:25 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:19.207 03:40:25 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:19.207 03:40:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:19.207 03:40:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:19.207 03:40:25 -- common/autotest_common.sh@10 -- # set +x 00:37:19.466 ************************************ 00:37:19.466 START TEST keyring_linux 00:37:19.466 ************************************ 00:37:19.466 03:40:25 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:19.466 * Looking for test storage... 00:37:19.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:19.466 03:40:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:19.466 03:40:25 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:19.466 03:40:25 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:19.466 03:40:25 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:19.466 03:40:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.466 03:40:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.466 03:40:25 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.466 03:40:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:19.466 03:40:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:19.466 03:40:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:19.466 03:40:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:19.466 03:40:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:19.466 03:40:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:19.466 03:40:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:19.466 03:40:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:19.466 03:40:25 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:19.466 /tmp/:spdk-test:key0 00:37:19.466 03:40:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:19.466 03:40:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:19.466 03:40:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:19.466 /tmp/:spdk-test:key1 00:37:19.466 03:40:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3378925 00:37:19.466 03:40:25 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:19.466 03:40:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3378925 00:37:19.466 03:40:25 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3378925 ']' 00:37:19.466 03:40:25 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:19.466 03:40:25 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:19.466 03:40:25 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:19.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:19.466 03:40:25 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:19.466 03:40:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:19.466 [2024-07-15 03:40:25.563746] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
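The prep_key calls above format each test key into the NVMe/TCP interchange form with an inline python invocation, write it to /tmp/:spdk-test:keyN, and chmod it to 0600. A minimal bash sketch of that formatting, assuming from the decoded output (NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: for key 00112233445566778899aabbccddeeff) that the base64 body is the ASCII key followed by its little-endian CRC32 and that digest 0 renders as the "00" field; this is an illustration inferred from the log output, not the verbatim SPDK helper:

format_interchange_psk() {
    # key is the hex string from linux.sh; digest selects the hash field (0 here).
    # The 4-byte CRC32 trailer and the %02x digest rendering are assumptions
    # reverse-engineered from the base64 payload printed above.
    local key=$1 digest=$2
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$key" "$digest"
}

format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0   # key files must not be group/world readable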
00:37:19.466 [2024-07-15 03:40:25.563823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378925 ] 00:37:19.467 EAL: No free 2048 kB hugepages reported on node 1 00:37:19.725 [2024-07-15 03:40:25.625346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:19.725 [2024-07-15 03:40:25.710119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.983 03:40:25 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:19.983 03:40:25 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:19.983 03:40:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:19.983 03:40:25 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.983 03:40:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:19.983 [2024-07-15 03:40:25.943936] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:19.983 null0 00:37:19.983 [2024-07-15 03:40:25.975972] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:19.983 [2024-07-15 03:40:25.976447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:19.983 03:40:25 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.983 03:40:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:19.983 200928794 00:37:19.983 03:40:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:19.983 197982773 00:37:19.983 03:40:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3378993 00:37:19.983 03:40:25 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:19.983 03:40:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3378993 /var/tmp/bperf.sock 00:37:19.983 03:40:25 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3378993 ']' 00:37:19.983 03:40:25 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:19.983 03:40:25 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:19.983 03:40:26 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:19.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:19.983 03:40:26 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:19.983 03:40:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:19.983 [2024-07-15 03:40:26.044855] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
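The keyctl add calls above load both formatted PSKs into the kernel session keyring (@s), printing the serial numbers (200928794 and 197982773) that the check_keys and cleanup steps below search, print, and unlink; bdevperf is then pointed at the key by name via --psk :spdk-test:key0. A condensed sketch of that keyctl round-trip, using the key0 payload from this log:

# Load the PSK into the session keyring; keyctl prints the new key's serial.
sn=$(keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s)
keyctl search @s user :spdk-test:key0   # resolves the name back to the same serial
keyctl print "$sn"                      # dumps the payload for the key-content check
keyctl unlink "$sn"                     # cleanup step; the log reports "1 links removed"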
00:37:19.983 [2024-07-15 03:40:26.044982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378993 ] 00:37:19.983 EAL: No free 2048 kB hugepages reported on node 1 00:37:19.983 [2024-07-15 03:40:26.114827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.240 [2024-07-15 03:40:26.210199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:20.240 03:40:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:20.240 03:40:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:20.240 03:40:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:20.240 03:40:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:20.498 03:40:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:20.498 03:40:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:20.755 03:40:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:20.755 03:40:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:21.011 [2024-07-15 03:40:27.050996] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:21.011 nvme0n1 00:37:21.011 03:40:27 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:21.011 03:40:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:21.011 03:40:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:21.011 03:40:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:21.011 03:40:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:21.011 03:40:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.268 03:40:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:21.268 03:40:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:21.268 03:40:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:21.268 03:40:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:21.268 03:40:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.268 03:40:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.268 03:40:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:21.526 03:40:27 keyring_linux -- keyring/linux.sh@25 -- # sn=200928794 00:37:21.526 03:40:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:21.526 03:40:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:37:21.526 03:40:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 200928794 == \2\0\0\9\2\8\7\9\4 ]] 00:37:21.526 03:40:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 200928794 00:37:21.526 03:40:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:21.526 03:40:27 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:21.783 Running I/O for 1 seconds... 00:37:22.716 00:37:22.716 Latency(us) 00:37:22.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.716 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:22.716 nvme0n1 : 1.01 6113.36 23.88 0.00 0.00 20788.35 5194.33 27379.48 00:37:22.716 =================================================================================================================== 00:37:22.716 Total : 6113.36 23.88 0.00 0.00 20788.35 5194.33 27379.48 00:37:22.716 0 00:37:22.716 03:40:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:22.716 03:40:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:22.974 03:40:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:22.974 03:40:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:22.974 03:40:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:22.974 03:40:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:22.974 03:40:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.974 03:40:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:23.232 03:40:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:23.232 03:40:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:23.232 03:40:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:23.232 03:40:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:23.232 03:40:29 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:23.232 03:40:29 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:23.232 03:40:29 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:23.232 03:40:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:23.232 03:40:29 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:23.232 03:40:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:23.232 03:40:29 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:23.232 03:40:29 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:23.490 [2024-07-15 03:40:29.531949] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:23.490 [2024-07-15 03:40:29.532255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b9860 (107): Transport endpoint is not connected 00:37:23.490 [2024-07-15 03:40:29.533244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b9860 (9): Bad file descriptor 00:37:23.490 [2024-07-15 03:40:29.534242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:23.490 [2024-07-15 03:40:29.534266] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:23.490 [2024-07-15 03:40:29.534281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:23.490 request: 00:37:23.490 { 00:37:23.490 "name": "nvme0", 00:37:23.490 "trtype": "tcp", 00:37:23.490 "traddr": "127.0.0.1", 00:37:23.490 "adrfam": "ipv4", 00:37:23.490 "trsvcid": "4420", 00:37:23.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:23.490 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:23.490 "prchk_reftag": false, 00:37:23.490 "prchk_guard": false, 00:37:23.490 "hdgst": false, 00:37:23.490 "ddgst": false, 00:37:23.490 "psk": ":spdk-test:key1", 00:37:23.490 "method": "bdev_nvme_attach_controller", 00:37:23.490 "req_id": 1 00:37:23.490 } 00:37:23.490 Got JSON-RPC error response 00:37:23.490 response: 00:37:23.490 { 00:37:23.490 "code": -5, 00:37:23.490 "message": "Input/output error" 00:37:23.490 } 00:37:23.490 03:40:29 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:23.490 03:40:29 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:23.490 03:40:29 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:23.490 03:40:29 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@33 -- # sn=200928794 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 200928794 00:37:23.491 1 links removed 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@33 -- # sn=197982773 00:37:23.491 
03:40:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 197982773 00:37:23.491 1 links removed 00:37:23.491 03:40:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3378993 00:37:23.491 03:40:29 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3378993 ']' 00:37:23.491 03:40:29 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3378993 00:37:23.491 03:40:29 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:23.491 03:40:29 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:23.491 03:40:29 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3378993 00:37:23.491 03:40:29 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:23.491 03:40:29 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:23.491 03:40:29 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3378993' 00:37:23.491 killing process with pid 3378993 00:37:23.491 03:40:29 keyring_linux -- common/autotest_common.sh@967 -- # kill 3378993 00:37:23.491 Received shutdown signal, test time was about 1.000000 seconds 00:37:23.491 00:37:23.491 Latency(us) 00:37:23.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.491 =================================================================================================================== 00:37:23.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:23.491 03:40:29 keyring_linux -- common/autotest_common.sh@972 -- # wait 3378993 00:37:23.748 03:40:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3378925 00:37:23.748 03:40:29 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3378925 ']' 00:37:23.748 03:40:29 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3378925 00:37:23.748 03:40:29 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:23.748 03:40:29 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:23.748 03:40:29 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3378925 00:37:23.748 03:40:29 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:23.748 03:40:29 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:23.748 03:40:29 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3378925' 00:37:23.748 killing process with pid 3378925 00:37:23.748 03:40:29 keyring_linux -- common/autotest_common.sh@967 -- # kill 3378925 00:37:23.748 03:40:29 keyring_linux -- common/autotest_common.sh@972 -- # wait 3378925 00:37:24.336 00:37:24.336 real 0m4.812s 00:37:24.336 user 0m9.102s 00:37:24.336 sys 0m1.609s 00:37:24.336 03:40:30 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:24.336 03:40:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:24.336 ************************************ 00:37:24.336 END TEST keyring_linux 00:37:24.336 ************************************ 00:37:24.336 03:40:30 -- common/autotest_common.sh@1142 -- # return 0 00:37:24.336 03:40:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:24.336 03:40:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:24.336 03:40:30 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:24.336 03:40:30 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:24.336 03:40:30 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:24.336 03:40:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:24.336 03:40:30 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:24.336 03:40:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:24.336 03:40:30 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:24.336 03:40:30 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:24.336 03:40:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:24.336 03:40:30 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:24.336 03:40:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:24.336 03:40:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:24.336 03:40:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:24.336 03:40:30 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:24.336 03:40:30 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:24.336 03:40:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:24.336 03:40:30 -- common/autotest_common.sh@10 -- # set +x 00:37:24.336 03:40:30 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:24.336 03:40:30 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:24.336 03:40:30 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:24.336 03:40:30 -- common/autotest_common.sh@10 -- # set +x 00:37:26.237 INFO: APP EXITING 00:37:26.237 INFO: killing all VMs 00:37:26.237 INFO: killing vhost app 00:37:26.237 INFO: EXIT DONE 00:37:27.172 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:27.172 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:27.172 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:27.172 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:27.172 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:27.172 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:27.172 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:27.172 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:27.172 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:27.172 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:27.172 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:27.172 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:27.172 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:27.172 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:27.172 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:27.172 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:27.172 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:28.544 Cleaning 00:37:28.544 Removing: /var/run/dpdk/spdk0/config 00:37:28.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:28.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:28.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:28.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:28.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:28.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:28.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:28.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:28.544 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:28.544 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:28.544 Removing: /var/run/dpdk/spdk1/config 00:37:28.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:28.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:28.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:28.544 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:28.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:28.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:28.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:28.544 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:28.544 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:28.544 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:28.544 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:28.544 Removing: /var/run/dpdk/spdk2/config 00:37:28.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:28.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:28.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:28.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:28.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:28.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:28.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:28.544 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:28.544 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:28.544 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:28.544 Removing: /var/run/dpdk/spdk3/config 00:37:28.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:28.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:28.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:28.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:28.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:28.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:28.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:28.544 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:28.544 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:28.544 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:28.544 Removing: /var/run/dpdk/spdk4/config 00:37:28.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:28.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:28.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:28.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:28.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:28.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:28.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:28.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:28.544 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:28.544 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:28.544 Removing: /dev/shm/bdev_svc_trace.1 00:37:28.544 Removing: /dev/shm/nvmf_trace.0 00:37:28.545 Removing: /dev/shm/spdk_tgt_trace.pid3058706 00:37:28.545 Removing: /var/run/dpdk/spdk0 00:37:28.545 Removing: /var/run/dpdk/spdk1 00:37:28.545 Removing: /var/run/dpdk/spdk2 00:37:28.545 Removing: /var/run/dpdk/spdk3 00:37:28.545 Removing: /var/run/dpdk/spdk4 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3057161 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3057890 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3058706 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3059141 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3059828 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3059968 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3060815 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3060820 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3061065 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3062812 00:37:28.545 Removing: 
/var/run/dpdk/spdk_pid3063804 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3064007 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3064304 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3064506 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3064694 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3064851 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3065009 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3065193 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3065503 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3067848 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3068017 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3068179 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3068187 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3068615 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3068629 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3069055 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3069063 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3069350 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3069362 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3069524 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3069633 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3070018 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3070180 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3070376 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3070545 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3070591 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3070754 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3070910 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3071148 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3071339 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3071499 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3071655 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3071928 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3072085 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3072238 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3072400 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3072668 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3072830 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3072987 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3073225 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3073417 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3073571 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3073735 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3074004 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3074171 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3074323 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3074588 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3074669 00:37:28.545 Removing: /var/run/dpdk/spdk_pid3074873 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3076922 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3130674 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3133229 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3140043 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3143218 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3145560 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3146011 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3150012 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3154257 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3154260 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3154914 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3155568 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3156122 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3156645 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3156656 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3156795 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3156926 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3156932 00:37:28.803 Removing: 
/var/run/dpdk/spdk_pid3157587 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3158246 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3158786 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3159179 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3159308 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3159442 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3160340 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3161055 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3166403 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3166673 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3169177 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3172775 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3174916 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3181280 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3186975 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3188283 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3188950 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3199005 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3201212 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3226372 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3229153 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3230328 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3231525 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3231655 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3231792 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3231814 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3232250 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3233560 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3234162 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3234584 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3236195 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3236505 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3237066 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3239888 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3243316 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3246838 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3270352 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3272990 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3276877 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3277814 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3278902 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3281445 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3283683 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3287882 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3287891 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3290653 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3290787 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3290919 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3291192 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3291312 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3292382 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3293561 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3294737 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3295919 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3297095 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3298303 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3302696 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3303144 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3304421 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3305161 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3308744 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3310739 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3314139 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3317452 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3323661 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3328001 00:37:28.803 Removing: /var/run/dpdk/spdk_pid3328003 00:37:28.804 Removing: 
/var/run/dpdk/spdk_pid3340806 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3341215 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3341741 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3342149 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3342724 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3343133 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3343542 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3343947 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3346439 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3346575 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3350358 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3350531 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3352134 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3357038 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3357047 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3359935 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3361332 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3362727 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3363472 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3365507 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3366380 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3371724 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3372038 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3372430 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3373978 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3374260 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3374662 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3377103 00:37:28.804 Removing: /var/run/dpdk/spdk_pid3377107 00:37:29.061 Removing: /var/run/dpdk/spdk_pid3378560 00:37:29.061 Removing: /var/run/dpdk/spdk_pid3378925 00:37:29.061 Removing: /var/run/dpdk/spdk_pid3378993 00:37:29.061 Clean 00:37:29.061 03:40:35 -- common/autotest_common.sh@1451 -- # return 0 00:37:29.061 03:40:35 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:29.061 03:40:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:29.061 03:40:35 -- common/autotest_common.sh@10 -- # set +x 00:37:29.061 03:40:35 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:29.061 03:40:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:29.061 03:40:35 -- common/autotest_common.sh@10 -- # set +x 00:37:29.061 03:40:35 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:29.061 03:40:35 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:29.061 03:40:35 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:29.061 03:40:35 -- spdk/autotest.sh@391 -- # hash lcov 00:37:29.061 03:40:35 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:29.061 03:40:35 -- spdk/autotest.sh@393 -- # hostname 00:37:29.061 03:40:35 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:29.318 geninfo: WARNING: invalid characters removed from testname! 
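The lcov invocations that follow merge the post-test counters captured above with the pre-test baseline and then strip DPDK, system, and example sources from the total. A condensed sketch of that sequence, with the genhtml rc flags trimmed and SPDK_DIR standing in for the workspace checkout path:

# Shared lcov flags, kept in an array so they expand as separate words.
LCOV=(lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)
"${LCOV[@]}" -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info    # capture post-test counters
"${LCOV[@]}" -a cov_base.info -a cov_test.info -o cov_total.info    # merge with the baseline
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    "${LCOV[@]}" -r cov_total.info "$pat" -o cov_total.info         # drop out-of-tree/example code
done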
00:38:01.423 03:41:03 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:01.423 03:41:07 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:04.701 03:41:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:07.981 03:41:13 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:10.504 03:41:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:13.784 03:41:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:16.313 03:41:22 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:16.313 03:41:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:16.313 03:41:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:16.313 03:41:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:16.313 03:41:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:16.313 03:41:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.313 03:41:22 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.313 03:41:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.313 03:41:22 -- paths/export.sh@5 -- $ export PATH 00:38:16.313 03:41:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.313 03:41:22 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:16.313 03:41:22 -- common/autobuild_common.sh@444 -- $ date +%s 00:38:16.313 03:41:22 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721007682.XXXXXX 00:38:16.313 03:41:22 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721007682.F7Cn6n 00:38:16.313 03:41:22 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:38:16.313 03:41:22 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:38:16.313 03:41:22 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:16.313 03:41:22 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:16.313 03:41:22 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:16.313 03:41:22 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:16.313 03:41:22 -- common/autobuild_common.sh@460 -- $ get_config_params 00:38:16.313 03:41:22 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:16.313 03:41:22 -- common/autotest_common.sh@10 -- $ set +x 00:38:16.313 03:41:22 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:16.313 03:41:22 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:38:16.313 03:41:22 -- pm/common@17 -- $ local monitor 00:38:16.313 03:41:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:16.313 03:41:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:16.313 03:41:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:16.313 
03:41:22 -- pm/common@21 -- $ date +%s 00:38:16.313 03:41:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:16.313 03:41:22 -- pm/common@21 -- $ date +%s 00:38:16.313 03:41:22 -- pm/common@25 -- $ sleep 1 00:38:16.313 03:41:22 -- pm/common@21 -- $ date +%s 00:38:16.313 03:41:22 -- pm/common@21 -- $ date +%s 00:38:16.313 03:41:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721007682 00:38:16.313 03:41:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721007682 00:38:16.313 03:41:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721007682 00:38:16.313 03:41:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721007682 00:38:16.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721007682_collect-vmstat.pm.log 00:38:16.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721007682_collect-cpu-load.pm.log 00:38:16.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721007682_collect-cpu-temp.pm.log 00:38:16.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721007682_collect-bmc-pm.bmc.pm.log 00:38:17.248 03:41:23 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:38:17.248 03:41:23 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:17.248 03:41:23 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:17.248 03:41:23 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:17.248 03:41:23 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:17.248 03:41:23 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:17.248 03:41:23 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:17.248 03:41:23 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:17.248 03:41:23 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:17.248 03:41:23 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:17.248 03:41:23 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:17.248 03:41:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:17.248 03:41:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:17.248 03:41:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:17.248 03:41:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:17.248 03:41:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:17.248 03:41:23 -- pm/common@44 -- $ pid=3390201 00:38:17.248 03:41:23 -- pm/common@50 -- $ kill -TERM 3390201 00:38:17.248 03:41:23 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:38:17.248 03:41:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:17.248 03:41:23 -- pm/common@44 -- $ pid=3390203 00:38:17.248 03:41:23 -- pm/common@50 -- $ kill -TERM 3390203 00:38:17.248 03:41:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:17.248 03:41:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:17.248 03:41:23 -- pm/common@44 -- $ pid=3390205 00:38:17.248 03:41:23 -- pm/common@50 -- $ kill -TERM 3390205 00:38:17.248 03:41:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:17.248 03:41:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:17.248 03:41:23 -- pm/common@44 -- $ pid=3390236 00:38:17.248 03:41:23 -- pm/common@50 -- $ sudo -E kill -TERM 3390236 00:38:17.506 + [[ -n 2951897 ]] 00:38:17.506 + sudo kill 2951897 00:38:17.514 [Pipeline] } 00:38:17.532 [Pipeline] // stage 00:38:17.537 [Pipeline] } 00:38:17.556 [Pipeline] // timeout 00:38:17.562 [Pipeline] } 00:38:17.583 [Pipeline] // catchError 00:38:17.589 [Pipeline] } 00:38:17.610 [Pipeline] // wrap 00:38:17.616 [Pipeline] } 00:38:17.635 [Pipeline] // catchError 00:38:17.645 [Pipeline] stage 00:38:17.648 [Pipeline] { (Epilogue) 00:38:17.665 [Pipeline] catchError 00:38:17.667 [Pipeline] { 00:38:17.684 [Pipeline] echo 00:38:17.686 Cleanup processes 00:38:17.693 [Pipeline] sh 00:38:17.976 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:17.976 3390354 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:17.976 3390468 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:17.989 [Pipeline] sh 00:38:18.270 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:18.271 ++ grep -v 'sudo pgrep' 00:38:18.271 ++ awk '{print $1}' 00:38:18.271 + sudo kill -9 3390354 00:38:18.282 [Pipeline] sh 00:38:18.561 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:28.575 [Pipeline] sh 00:38:28.852 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:28.852 Artifacts sizes are good 00:38:28.868 [Pipeline] archiveArtifacts 00:38:28.875 Archiving artifacts 00:38:29.098 [Pipeline] sh 00:38:29.380 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:29.392 [Pipeline] cleanWs 00:38:29.400 [WS-CLEANUP] Deleting project workspace... 00:38:29.401 [WS-CLEANUP] Deferred wipeout is used... 00:38:29.406 [WS-CLEANUP] done 00:38:29.407 [Pipeline] } 00:38:29.420 [Pipeline] // catchError 00:38:29.429 [Pipeline] sh 00:38:29.706 + logger -p user.info -t JENKINS-CI 00:38:29.714 [Pipeline] } 00:38:29.728 [Pipeline] // stage 00:38:29.732 [Pipeline] } 00:38:29.746 [Pipeline] // node 00:38:29.752 [Pipeline] End of Pipeline 00:38:29.785 Finished: SUCCESS